AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns
(www.irishtimes.com)
I can generally agree with this, but I think a lot of people overestimate where it DOES belong.
For example, you'll see a lot of tech bros talking about how AI is great at replacing artists, while any artist who knows their shit can show you every way it falls short of human-made work. But those same artists might turn around and say AI is still incredibly good at programming... because they're not programmers.
Totally. After all, it's built on a similar foundation to existing spellcheck systems: predict the likely next word. It's good as a thesaurus too. (e.g. "what's that word for someone who's full of themselves, self-centered, and boastful?" and it'll spit out "egocentric")
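To make that "predict the likely next word" idea concrete, here's a toy sketch of a bigram predictor in Python. The tiny corpus and word choices are made up for illustration, and a real LLM is a neural network over subword tokens rather than anything this simple; this just shows the core trick of picking the most frequent continuation.

```python
# Toy "predict the likely next word" model: a bigram frequency table.
# The sample corpus below is invented for illustration only.
from collections import Counter, defaultdict

corpus = (
    "someone who is full of themselves is egocentric . "
    "someone who is boastful is arrogant . "
    "someone who is self-centered is egocentric ."
).split()

# Count which word follows which: estimate P(next | current) by frequency.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("is"))   # -> 'egocentric' (most common continuation here)
print(predict_next("who"))  # -> 'is'
```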
Only for very basic, common, or broad issues. LLMs generally sound very confident and provide answers regardless of whether there's actually a strong source behind them. Plus, they tend to ignore the context of wherever they source information from.
For example, if I ask it how to change X setting in a niche piece of software, it will often just make up an entire name for a setting or menu, because it has to say something that sounds right. Once the previous text is "Absolutely! You can fix X by...", it's just predicting the most likely next term, and that's never going to be "wait, never mind, I don't think that setting even exists!" but a made-up name instead. (This is one of the reasons "thinking" versions of models perform better: the internal dialogue can reasonably include a correction, retraction, or self-questioning.)
It will pull from names and text of entirely different posts that happened to display on the page it scraped, make up words that never appeared on any page, or infer a meaning that doesn't actually exist.
But if you have a more common question like "my computer is having X issues, what could this be?" it'll probably give you a good broad list, and if you narrow it down to RAM issues, it'll probably recommend MemTest86.
As someone else already mentioned, this is mostly just because Google deliberately made search worse. Other search engines that haven't enshittified, like the one I use (Kagi), tend to give much better results than Google, without you needing to use AI features at all.
On that note though, there is an interesting trend where AI models tend to pick lower-ranked, less SEO-optimized pages as sources, yet still pick ones with better information on average. I'm no expert on that in particular and couldn't really tell you why, other than that a model given extra computing power and time can probably interpret the context of a page better than an algorithm built to return 30 results in 0.3 seconds at scale.
Agreed.
I find that people only think it's good when using it for something they don't already know, so they believe everything it says. Catch-22. When they use it for something they already know, it's very easy to see how it lies and makes shit up, because it's a Markov chain on steroids and is not impressive in any way. Those billions could have housed and fed every human in a starving country, but instead we have the digital equivalent of Funko Pop minions.
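As a toy illustration of that "Markov chain" framing (hedged: real LLMs are far more sophisticated than this, which is the "on steroids" part, and the sample corpus here is invented), chaining "some plausible next word" predictions produces text that reads fluently but isn't grounded in anything true:

```python
# Toy Markov-chain text generator: walk a bigram table and always emit
# *some* plausible-looking next word, whether or not the result is true.
# Sample corpus invented purely for this demo.
import random
from collections import defaultdict

corpus = (
    "the setting is in the display menu . "
    "the menu is in the advanced tab . "
    "the tab is in the settings window ."
).split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def babble(start: str, length: int = 10, seed: int = 0) -> str:
    """Walk the chain, stitching together fragments that merely sound right."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Fluent-sounding, confidently wrong "instructions" built from fragments:
print(babble("the"))
```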
I also find that in daily life, the people who use it and brag about it are, 95% of the time, the most unintelligent people I know.
Note this doesn't apply to machine learning.