this post was submitted on 08 Jan 2026
601 points (99.7% liked)
Technology
If we go by personal experience: we recently had several people's time wasted troubleshooting an issue with a very well-known commercial Java app server. An AI overview hallucinated a fake system property that was supposed to address the issue we had.
The person who proposed the change neglected to mention they got it from AI until someone noticed the setting did not appear anywhere in the system properties documented by the vendor. Now their personal reputation is that they can't be trusted, and they look lazy on top of it, because they couldn't use their own eyes to read a one-page document.
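Part of why this slips through: the JVM accepts any -Dkey=value flag without complaint, so setting a nonexistent property fails silently instead of failing at startup. A minimal sketch of that behavior (the property name here is made up, standing in for the hallucinated one):

```java
public class FakePropertyDemo {
    public static void main(String[] args) {
        // Launch with: java -Dserver.connection.turbo=true FakePropertyDemo
        // The JVM records any -D flag it is given; it never errors on
        // properties that nothing actually reads.
        String value = System.getProperty("server.connection.turbo");
        System.out.println("server.connection.turbo = " + value);
        // Prints the value if the flag was set, or "null" otherwise.
        // An app server only changes behavior for properties it actually
        // consults, so checking the vendor's documented property list is
        // the only reliable way to validate a suggested setting.
    }
}
```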
That’s a very interesting insight. Maybe the amount of hallucination depends on whether the “knowledge” was loaded in the form of a prompt vs. training data? In the experience I’m talking about there’s no hallucination at all, but there are sometimes wrong conclusions and hypotheses, especially with really tricky bugs. But that’s normal; the really tricky edge cases are probably not something I’d expect to find on SO anyway…