The first problem is the name. It's NOT artificial intelligence, it's artificial stupidity.
People BOUGHT intelligence but GOT stupidity.
Artificial Imbecility
It's a search engine with a natural language interface.
An unreliable search engine that lies
It obfuscates its sources, so you don't know if the answer to your question is coming from a relevant expert or the dankest corners of Reddit...it all sounds the same after it's been processed by a hundred billion GPUs!
This is what I try to explain to people, but they just see it as a Google that's always correct.
People will accept either intelligence or stupidity. They will pay for a flattering sycophant.
Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.
Seems to be behind a Google form?
https://docs.google.com/forms/d/e/1FAIpQLSc8rU8OpQWU44gYDeZyINUZjBFwu--1uTbxixK_PRSVrfaH8Q/viewform
STOP CALCULATING KEEP SHOVELING
Wonder if the 5% that actually made money included companies that sell enterprise AI services, like AWS, Microsoft, and Google?
Nvidia?
It's also deskilling people.
https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
But surely the next 30 billion they are going to burn will get it right!
I think there are real productivity gains to be had but the vast majority are probably leaning into the idea of replacing people too much. It helps me do my job but I'm still the decision maker and I need to review the outputs. I'm still accountable for what AI gives me so I'm not willing to blindly pass that stuff forward.
Yeah. The Dunning-Kruger effect is a real problem here.
I saw a meme saying something like: gen AI is a real expert in everything but completely clueless about my area of specialisation.
As in... it generates plausible answers that seem great, but they're just terrible answers.
I'm a consultant in a legal-adjacent field, 20 years deep. I've been using a model from Hugging Face over the last few months.
It can save me time by generating a lot of boilerplate with references et cetera. However, it very regularly overlooks critically important components. If I didn't know about these things, I wouldn't know they were missing from the answer.
So really, it can't help you be more knowledgeable; it can only support you at your existing level.
Additionally, for complex or very specific questions, it's just a confidently incorrect failure. It sucks that it can't tell you how confident it is in a given answer.
I have no proof, but between the AI push, Turnip getting re-elected, and his rollback of EPA rules, it feels like this whole AI thing was an excuse to burn more fossil fuels.
If I was invested in AI, and considering AI's thirst for electricity, I would absolutely make a similar investment in energy. That way, as the AI server farms suck up the electricity I would get at least some of that money back from the energy market.
We're now at the "if you don't, your competitor will" stage, so you really have no choice. There are people who don't use Google anymore and just use ChatGPT for all questions.
As a programmer, it's helping my productivity. And look, I'm an SDET, so in theory I'd be the first to go. I tried to make an agent that does most of my job, but there are always things to correct.
But programming requires a lot of boilerplate code, and using an agent to generate boilerplate files that I can then correct and adjust speeds up a lot of what I do.
I don't think I can be replaced so far, but my team isn't looking to expand right now because we're getting more work done.
Does anybody have the original study? I tried to find it but the link is dead (looks like NANDA pulled it).