Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
That depends on how hardcore of a fatalist you are.
If you're purely a fatalist, then free will is an illusion, laws and punishment are immoral, consciousness is meaningless, and we are nothing more than deterministic pattern-matching machines, different from LLMs only in the details of our implementation and in the terrible optimization that evolution is known for.
But if you believe in some degree of free will, or you think there is value in consciousness, then we differ because LLMs are just auto-complete. They pseudo-randomly choose from a weighted list of statistically likely words (actually tokens) that would come next given the context (the conversation history and prompt). There is no free will and no understanding, any more than the man in the Chinese room understands Mandarin.
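To make that "weighted list" concrete, here's a rough Python sketch of the sampling step. The tokens, scores, and temperature value are invented for illustration; a real model scores tens of thousands of tokens at every step.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Pseudo-randomly pick one token, weighted by how likely the model rates it."""
    # Turn raw scores (logits) into probabilities with a softmax.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    biggest = max(scaled.values())
    exps = {tok: math.exp(s - biggest) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # random.choices does the weighted pseudo-random draw.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical scores a model might assign after "The cat sat on the".
next_token_scores = {" mat": 6.1, " floor": 5.3, " roof": 3.9, " piano": 1.2}
print(sample_next_token(next_token_scores))
```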
The whole conversation is so full of charged words because the LLM providers have intentionally anthropomorphized LLMs in their marketing, by using words like "reasoning". The APIs from before LLMs blew up provide a far less emotionally charged description of what LLMs do, with terms like "completions".
You wouldn't compare a human mind to your phone keyboard's word prediction, yet it's doing the same thing, just scaled down. Where do you draw the line?
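For a sense of what the keyboard version of "the same thing" looks like, here's a toy sketch: a simple bigram counter that suggests the next word purely from which words followed which before. The training sentence is made up; a real keyboard learns from your typing history.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict[str, Counter]:
    """Count which word follows which in the training text."""
    follows: dict[str, Counter] = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(follows: dict[str, Counter], word: str, n: int = 3) -> list[str]:
    """Return the n words most often seen after `word`."""
    return [w for w, _ in follows[word.lower()].most_common(n)]

model = train_bigrams("the cat sat on the mat and then the cat ran to the door")
print(suggest(model, "the"))  # e.g. ['cat', 'mat', 'door']
```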
Isn't that sorta what humans do? Picking words based on the ones used before, taking into consideration the context of the conversation?
Only if, like I said, you're a hardcore fatalist.
Not really. When asked a question, a human thinks about the answer and constructs a sentence to express that point. An LLM doesn't know what the answer is ahead of time; it isn't working towards a point, it's just statistically guessing the next few tokens over and over again. The human equivalent would be making random mouth noises and hoping the other person interprets them as words.
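Roughly, that loop looks like the sketch below: there is no plan for the whole answer, just one weighted guess at a time, with each guess appended to the context before the next one. The `score_next_tokens` stand-in and its weights are invented here; a real model would score its entire vocabulary based on the full context.

```python
import random

def score_next_tokens(context: list[str]) -> dict[str, float]:
    # Stand-in for a real model: these tokens and weights are invented.
    return {"the": 0.4, "cat": 0.3, "sat": 0.2, "<end>": 0.1}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    context = list(prompt)
    for _ in range(max_tokens):
        weights = score_next_tokens(context)   # only the past is visible
        token = random.choices(list(weights), weights=list(weights.values()))[0]
        if token == "<end>":                   # a stop token ends the loop
            break
        context.append(token)                  # the pick becomes part of the context
    return context

print(" ".join(generate(["once", "upon", "a", "time"])))
```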