As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don't spread this misconception.
Edit: a typo
No thinking is not the same as no actions; we've had bots in games for decades, and those bots look like they act reasonably, but there was never any thinking involved.
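A minimal sketch of the kind of rule-based game bot being described; the rules and thresholds here are illustrative assumptions, not taken from any particular game:

```python
def bot_decide(health: int, enemy_visible: bool, has_ammo: bool) -> str:
    """Pick an action from fixed if/else rules -- no thinking anywhere."""
    if health < 20:
        return "retreat"      # looks like self-preservation
    if enemy_visible and has_ammo:
        return "attack"       # looks like aggression
    if enemy_visible:
        return "take_cover"   # looks like caution
    return "patrol"           # looks like purposeful exploration

# The bot appears to behave sensibly in play, yet it is only a lookup
# over conditions a programmer wrote down in advance.
print(bot_decide(health=15, enemy_visible=True, has_ammo=True))  # "retreat"
```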
I feel like "a lot of agency" is wrong, since there is no agency, but that doesn't mean an LLM in a looped setup can't arrive at these actions and perform them. It requires neither agency nor thinking.
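A rough sketch of such a "looped setup", assuming a made-up TOOL:/FINAL: text protocol and a canned stand-in for the model so the loop itself is visible; none of these names come from a real framework. The model only ever emits text; the surrounding loop is what turns that text into actions.

```python
# Canned replies standing in for a real completion API, so the sketch runs as-is.
_SCRIPT = iter([
    "TOOL: search | helium balloon behaviour",
    "FINAL: done",
])

def fake_llm(prompt: str) -> str:
    """Stand-in for a real completion call; replays a fixed script."""
    return next(_SCRIPT)

def run_tool(name: str, arg: str) -> str:
    """Tiny whitelist of actions the loop is allowed to execute."""
    tools = {"search": lambda q: f"(results for {q!r})"}
    return tools.get(name, lambda _: "unknown tool")(arg)

def agent_loop(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = fake_llm(history)
        if reply.startswith("FINAL:"):           # model signals it is done
            return reply[len("FINAL:"):].strip()
        if reply.startswith("TOOL:"):            # e.g. "TOOL: search | query"
            name, _, arg = reply[len("TOOL:"):].partition("|")
            history += f"{name.strip()} -> {run_tool(name.strip(), arg.strip())}\n"
        else:
            history += reply + "\n"
    return "step limit reached"

print(agent_loop("find out what a released helium balloon does"))  # prints "done"
```

The loop mechanics are identical whether the text comes from a large model or from a canned script, which is the point: actions fall out of parsing text, not out of thinking.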
You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?
If we're concerned about misconceptions and misinformation, it would be helpful to know what informs your certainty that modeling that kind of complexity is impossible.
^source: Wondermark^
Bad bot
Reinforcement learning
Bazinga
That's leaving out vital information, however. Certain types of brains (e.g. mammal brains) can derive an abstract understanding of relationships from reinforcement learning. An LLM that is trained on "letting go of a stone makes it fall to the ground" will not be able to predict what "letting go of a stick" will result in. Unless it is trained on thousands of other non-stick objects also falling to the ground, in which case it will also tell you that letting go of a gas balloon will make it fall to the ground.
Well, that seems like a pretty easy hypothesis to test. Why don't you log on to ChatGPT and ask it what will happen if you let go of a helium balloon? Your hypothesis is that it'll say the balloon falls, so prove it.
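For anyone who wants to actually run that check, a minimal sketch using the OpenAI Python client (v1+); the model name and the exact wording of the question are assumptions, and `OPENAI_API_KEY` must be set in the environment:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is available
    messages=[
        {"role": "user",
         "content": "What happens if I let go of a helium balloon outdoors?"},
    ],
)
print(response.choices[0].message.content)
```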
This is not the gotcha that you think it is. Now stop wasting my time.
That's quite dishonest, because LLMs have had all manner of facts pre-trained into them, with datacenters all over the world catering to them. If you think they can learn in the real world without many, many iterations, while they still need pushing and prodding on simple tasks that humans perform, then I am not convinced.
It's like saying a chess-playing program like Stockfish is a good indicator of intelligence because it knows how to play chess, while forgetting that human chess players' expertise was used to train it and to define what makes a good chess program.
That's the thing with our terminology: we love to anthropomorphize things. It wasn't a big problem before, because most people had enough grasp on reality to understand that when a script prints :-) when the result is positive, or :-( otherwise, there is no actual mind behind it that can be happy or sad. But now the generator produces a convincing enough sequence of words, so people went mad, and this cute terminology doesn't work anymore.
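The script being described is about this much code; a deliberately trivial sketch:

```python
def report(result: float) -> None:
    """Print a smiley for a non-negative result, a frown otherwise."""
    print(":-)" if result >= 0 else ":-(")

report(42.0)   # :-)
report(-3.5)   # :-(
```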