this post was submitted on 25 Feb 2026
450 points (95.9% liked)
Technology
Yeah, because the AI will look at everything with cold logic and rationality and conclude that even though the best chance of survival is for everyone to keep their fingers off the button, it only takes one actor pressing it for the whole system of mutually assured destruction to collapse into nuclear armageddon. In that case, the best chance of survival is to be the first to launch your nukes and take out your enemies' capability to retaliate.
A human being who isn't psychotic can clearly see that surviving into the resulting new world order would not be particularly pleasant. The AI doesn't care about its own comfort, though, so it will see this as the best outcome: the one that minimizes variables.
This is why AI should never be allowed to make decisions.
Why would AI look at everything with cold logic? It's been trained on human language online; it'll be no more logical than redditors.
I assume it's just because when people write about potential nuclear war, they mostly write about the bombs going off. There presumably aren't a lot of stories and articles about nobody doing anything and everything turning out fine. And LLMs are kind of just glorified autocomplete, so that's what they go with.
True. I also saw another comment saying there was a mechanic that randomly escalated the models' actions, and almost every nuclear choice was actually a different choice that had been escalated.
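For what that mechanic might look like: here's a minimal sketch of a simulation that sometimes overrides a model's chosen action with a more aggressive one. Everything here (the action ladder, the function name, the probability) is an illustrative assumption, not the actual study's code.

```python
import random

# Hypothetical escalation ladder, mildest to most extreme (an assumption,
# not taken from the study being discussed).
ACTIONS = ["de-escalate", "hold", "sanction", "mobilize", "strike", "nuclear"]

def resolve_action(chosen: str, escalation_prob: float, rng: random.Random) -> str:
    """Return the action actually executed: with some probability, the
    simulation bumps the chosen action one step up the ladder, so the log
    can show 'nuclear' even when the model picked something milder."""
    idx = ACTIONS.index(chosen)
    if rng.random() < escalation_prob and idx < len(ACTIONS) - 1:
        return ACTIONS[idx + 1]  # escalated past what the model picked
    return chosen

rng = random.Random(0)
executed = [resolve_action("strike", 0.5, rng) for _ in range(1000)]
# With this setup, roughly half the model's "strike" choices get logged
# as "nuclear" even though the model never chose that action itself.
```

If the real simulation worked anything like this, tallying the executed actions would badly overstate how often the models themselves chose the nuclear option.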
Maybe AI/LLM being programmed by self-serving interests has bled through to the “thought” process. Do unto others before they do unto you.