this post was submitted on 25 Apr 2025
174 points (95.8% liked)
Technology
you are viewing a single comment's thread
Try this on your friends: make up an idiom, walk up to them, say it without context, then ask "meaning?" and see how they respond.
Pretty sure most of mine would just make up a bullshit response and go along with what I'm saying unless I give them more context.
There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them. This is just ham-fisted robo-gotcha journalism.
My friends would probably say something like "I've never heard that one, but I guess it means something like ..."
The problem is, these LLMs give no indication of when they're making stuff up versus when they're repeating an incontrovertible truth. Lots of people don't understand the limitations of things like Google's AI summary*, so they will trust these false answers. Harmless here, but often not.
* I'm not counting the little disclaimer, because we've been taught to ignore small print after being faced with so much of it
Ok, but the point is that lots of people would just say something and then figure out if it's right later.
Quite frankly, you sound like a middle school teacher being hysterical about Wikipedia sometimes being wrong.
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and "hallucinations" are a real problem when they lead to real decisions and real consequences.
If you can't imagine why this is bad, maybe read some Kafka or watch some Black Mirror.
Lmfao. Yeah, ok, let's get my predictions from the depressing show dedicated to being relentlessly pessimistic at every single decision point.
And yeah, like I said, you sound like my hysterical middle school teacher claiming that Wikipedia would be society's downfall.
Guess what? It wasn't. People learned that the tool is error-prone and came up with strategies to use it while correcting for potential errors.
At a fundamental, technical level, components of a system can be error-prone but still be useful overall. Quantum computations have inherent probabilities and errors in them, but for some types of calculation a quantum computer is so much faster that you can run the same calculation 100 times, average out the results to remove the outlying errors, and still get to the right answer far faster than a classical computer.
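That repeat-and-aggregate idea is easy to sketch. Here's a toy version (nothing quantum-specific; `noisy_compute`, the error rate, and the run count are all made up for illustration):

```python
import random
from collections import Counter

def noisy_compute(correct=42, error_rate=0.3, rng=random):
    # Hypothetical stand-in for any probabilistic computation:
    # returns the right answer most of the time, garbage otherwise.
    if rng.random() < error_rate:
        return rng.randint(0, 100)  # an erroneous outlier
    return correct

def majority_vote(runs=100, rng=None):
    # Run the unreliable step many times and keep the most common
    # result; rare, scattered errors get outvoted.
    rng = rng or random.Random(0)
    results = [noisy_compute(rng=rng) for _ in range(runs)]
    return Counter(results).most_common(1)[0][0]

print(majority_vote())  # almost surely 42, even with a 30% error rate per run
```

Even a step that's wrong 30% of the time yields a reliable system once you vote across enough repetitions, which is the whole point: error-prone components, dependable whole.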
Computer chips in satellites and on the space station constantly have random bits of memory flipped by cosmic rays, but they still work fine because their RAM is special error-correcting RAM that uses similar methods to detect and fix those errors.
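ECC memory relies on codes like the classic Hamming(7,4), where 3 parity bits protect 4 data bits and a recomputed parity "syndrome" points straight at any single flipped bit. A toy sketch of the idea (not how any particular chip implements it in hardware):

```python
def hamming74_encode(d):
    # d: four data bits. Returns a 7-bit codeword with parity bits
    # at positions 1, 2, and 4 (1-indexed), as in classic Hamming(7,4).
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Recompute the three parity checks; together they spell out the
    # 1-indexed position of a single corrupted bit (0 = no error).
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    err = s1 + 2 * s2 + 4 * s3
    if err:
        c[err - 1] ^= 1  # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                   # a "cosmic ray" flips one bit in memory
print(hamming74_decode(word))  # the original data comes back: [1, 0, 1, 1]
```

Any single bit flip, whichever position it hits, gets located and reversed on read, so the memory as a whole behaves reliably despite unreliable bits.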
Designing for error correction is a thing, and people are perfectly capable of doing so in their personal lives.