this post was submitted on 16 Jul 2025
57 points (77.7% liked)

Ask Lemmy


To elaborate a little:

Since many people are unable to tell the difference between a "real human" and an AI, since these systems have been documented "going rogue" and acting outside their parameters, and since they can lie and can compose stories and pictures based on their training, I can't see AI as less than human at this point.

When I think about this, I suspect that this is why we cannot create so-called "AGI": we have no proper example or understanding of what to create, and so we created what we knew. Us.

The "hallucinating" is interesting to me specifically, because that seems to be what separates the AI of the past from modern models that act like our own brains.

I think we really don't want to accept what we have already accomplished, because we don't like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.

[–] NaibofTabr@infosec.pub 9 points 1 day ago (3 children)

The term "hallucinate" is a euphemism being pushed by the AI peddlers.

It's a computer program. It doesn't "hallucinate", it has errors.

In all cases of ML models being sold by companies, what you are actually looking at is poorly tested software that is not fit for purpose and has far less actual capability than what the marketing promises.

"Hallucination" in the context of LLMs is marketing bullshit designed to deflect from the reality that none of these programs have been properly quality checked and are extremely error prone.

If Excel gave bad answers for calculations 20% of the time it wouldn't be "hallucinating", it would just be broken, buggy software that requires more development time before distribution as a useful product.

[–] Opinionhaver@feddit.uk 11 points 1 day ago (1 children)

LLM “hallucinations” are only errors from a user expectations perspective. The actual purpose of these models is to generate natural-sounding language, not to provide factual answers. We often forget that - they were never designed as knowledge engines or reasoning tools.

The fact that they often get things right isn’t because they “know” anything - it’s a side effect of being trained on data that contains a lot of correct information. So when they get things wrong, it’s not a bug in the traditional sense - it’s just the model doing what it was designed to do: predict likely word sequences, not truth. Calling that a “hallucination” isn’t marketing spin - it’s a useful way to describe confident output that isn’t grounded in reality.
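That "predict likely word sequences, not truth" point can be made concrete with a toy sketch (the words and scores below are made up for illustration, not from any real model): the model only ranks continuations by statistical plausibility, so wrong answers always retain some probability mass.

```python
import math
import random

# Toy next-token step (illustrative only, not a real LLM).
# Hypothetical scores for continuations of "The capital of France is ..."
logits = {"Paris": 4.0, "Lyon": 2.0, "Berlin": 1.5}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {w: math.exp(v) / total for w, v in logits.items()}

# Wrong continuations keep nonzero probability, so sampling will
# occasionally emit them - plausible-sounding, not necessarily true.
word = random.choices(list(probs), weights=list(probs.values()))[0]
```

Nothing in this loop ever consults a fact; "Paris" only wins because the training data made it the statistically likeliest continuation.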

[–] Paradachshund@lemmy.today 2 points 1 day ago

Whatever they were designed for, they are currently being sold as the solution to nearly every problem. You can't expect a layperson to look further than that, and it's completely reasonable to judge what they do against the claims being used to sell them.

You can blame the marketing departments, but that original purpose you mention is no longer a major talking point (even if it should be).

[–] vrighter@discuss.tchncs.de 7 points 1 day ago

They aren't even errors. They are the system working as designed. The system is designed with randomness in mind so that the model can hallucinate, intentionally. It can't ever be made reliable, not without some sort of paradigm shift.
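A minimal sketch of that designed-in randomness (temperature sampling over made-up logits): raising the temperature flattens the distribution, so low-probability tokens get picked more often, on purpose.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax. Higher temperature
    flattens the distribution, boosting unlikely tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0):
    """Stochastically pick a token index. Even the least likely index
    can be returned - the randomness is the feature, not a bug."""
    probs = softmax_with_temperature(logits, temperature)
    return random.choices(range(len(logits)), weights=probs)[0]
```

As temperature approaches zero this sketch degenerates toward always picking the top token; products ship with a temperature above zero precisely because varied output is the goal.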

[–] quediuspayu@lemmy.dbzer0.com 2 points 1 day ago

They call it hallucination because the first guy didn't know the word confabulation.