Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
While the mechanisms of hallucination aren't the same, it absolutely can and does happen with humans. Has somebody ever told you something they thought was true and it wasn't? I'm sure you've even done it yourself. Maybe later you realized it might not be true, or that you got something confused with something else in your fleshy Rolodex.
Lies are deliberate; hallucinations are mistakes. Talking about lying AIs implies that they have something akin to free will or an intrinsic motivation, which they simply do not have. They are an emotionless tool designed to give good answers. It is comparable to claiming the mechanism for generating speech in the human brain comes up with lies, which it obviously doesn't; it just articulates them.
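To make that concrete, here's a deliberately minimal sketch of what a language model does at its core: sample the next token from a probability distribution. The tokens and numbers below are made up for illustration (no real model works on a hand-written dictionary like this), but the point survives: there is no belief state, truth check, or intent anywhere in the loop, so "mistake" fits and "lie" doesn't.

```python
import random

# Toy sketch: a next-token sampler. The tokens and probabilities here
# are invented for illustration; this is not any real model's code.
next_token_probs = {
    "Paris": 0.85,     # likely continuation that happens to be true
    "Lyon": 0.10,      # plausible-sounding continuation that is false
    "Atlantis": 0.05,  # implausible continuation
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token weighted by probability. Note what's missing:
    no belief, no truth check, no intent -- just likelihoods."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("The capital of France is", sample_next_token(next_token_probs))
```

Most of the time it prints "Paris"; occasionally it prints "Lyon". Nothing about the second case involved deception, because nothing in the process ever represented the truth in the first place.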
I'm not saying humans can't hallucinate. I'm saying it's not the same as lying, and that AIs can't lie.
Agreed
This is pretty much exactly how I explain hallucinations to people. In addition, I posited that maybe LLMs are more human than we think, since we pretty much make up shit all the time. xD
Haha, there are definitely some people who might be dumber than LLMs.
This conversation often gets us to R. Penrose's "consciousness is not computational" argument, from which we can retrace our steps with a separation of algorithmic processes. Is GenAI similar to the "stream of thought"? Perhaps, but does that lead to intelligence?
Exactly.
People always wanna classify AI as either super smart or super dumb: similar to the human brain, or just randomly guessing words and doing an okay job. But that is very subjective, and it's like sliding a little fader between two points whose definitions differ slightly for every person.
If we actually wanted to approach the question of "how intelligent are AIs compared to humans", we would need to write a lot of word definitions first, and I'm sure the answer at the end would be just as helpful as a shoulder shrug and an unenthusiastic "about half as intelligent". That's why these comparisons are stupid.
If AI is a good tool for us, great. If not, alright, let's skip it and go straight to the next big discovery, and stop getting hung up on semantics.
LLMs are not the final step on the way to real intelligence. They're a stepping stone.
A pebble
Meaning intelligence is still far away, or that it's not a big step technologically?
I want to point out that before calling them hallucinations, we already had a word to describe that: fabulations.
You could say they put the con in confabulation.
That would definitely sell better