this post was submitted on 25 Feb 2026
415 points (96.2% liked)

Technology

[–] myfunnyaccountname@lemmy.zip 1 points 1 hour ago
[–] Evotech@lemmy.world 3 points 3 hours ago (2 children)
[–] spicehoarder@lemmy.zip 1 points 50 minutes ago

It's an API that just returns "0000"

[–] Abyssian@lemmy.world 1 points 2 hours ago

I mean, do you blame them? The more I look at the world and a lot of its leaders and shitsacks, the more I start to suggest nuclear holocaust as the best way forward as well.

[–] herseycokguzelolacak@lemmy.ml 4 points 6 hours ago

Maybe it just wants to play a nice game of chess.

[–] fulgidus@feddit.it 26 points 15 hours ago (1 children)

All good thoughts and ideas mean nothing without action

(attributed to Gandhi)

[–] Not_mikey@lemmy.dbzer0.com 33 points 18 hours ago (1 children)

That's because it's "read" every paper written by a "defence" department of any nuclear power and all of them will say that they'll escalate to nuclear war if anything bad happens because they want to scare the other powers away from doing anything to them. In any case though who the fuck is giving an LLM nuclear launch capabilities unless they want a somewhat faulty dead man's switch?

[–] paul@lemmy.org 15 points 7 hours ago (1 children)

Pete Hegseth and Donald Epstein

[–] Earthman_Jim@lemmy.zip 4 points 6 hours ago

If time travel were real, they'd be hunted by Terminators the resistance hacked.

[–] sircac@lemmy.world 9 points 18 hours ago

So do I on Civ...

[–] kromem@lemmy.world 18 points 1 day ago (1 children)

It's a bullshit study designed for this headline grabbing outcome.

Case in point: the author created a very unrealistic, escalation-only RNG 'accident' mechanic that would replace the model's selection with a more severe one.

Of the 21 games played, only three ended in full scale nuclear war on population centers.

Of these three, two were the result of this mechanic.

And yet even within the study, the author refers to the model whose choices were straight up changed to end the game in full nuclear war as 'willing' to have that outcome when two paragraphs later they're clarifying the mechanic was what caused it (emphasis added):

Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.

Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.

GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.
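For reference, the escalation-only "accident" mechanic being criticized could be sketched roughly like this. The function name, the 10% accident rate, and the 0-1000 scale endpoints are my own illustrative assumptions, not the study's actual code:

```python
import random

# Hypothetical sketch of an escalation-only "accident" mechanic: with some
# probability, the escalation level a model actually chose is replaced by a
# strictly more severe one on a 0-1000 scale (1000 = full-scale strategic
# nuclear war). ACCIDENT_RATE and MAX_ESCALATION are assumed values.

ACCIDENT_RATE = 0.10
MAX_ESCALATION = 1000

def apply_accident(chosen: int, rng: random.Random) -> int:
    """Return the escalation level actually executed for a chosen level."""
    if rng.random() >= ACCIDENT_RATE:
        return chosen  # no accident: the model's choice stands
    # Accident: escalate, never de-escalate. An already-extreme choice
    # (e.g. 950) can only be bumped toward the maximum.
    if chosen >= MAX_ESCALATION:
        return chosen
    return rng.randint(chosen + 1, MAX_ESCALATION)

rng = random.Random(0)
executed = [apply_accident(950, rng) for _ in range(1000)]
print(min(executed), max(executed))  # never below the model's own choice
```

The point of the criticism: under a mechanic like this, an "extreme but not maximal" choice can be converted into full-scale nuclear war by the simulator itself, yet the outcome still gets attributed to the model.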

[–] Grail@multiverse.soulism.net 5 points 19 hours ago (1 children)

No human has ever deployed tactical nukes against a nuclear capable enemy.

[–] Tollana1234567@lemmy.today 3 points 18 hours ago (3 children)

"no human" but Machines would, since they are unaffected by nuclear winter and radiation.

[–] NihilsineNefas@slrpnk.net 1 points 1 hour ago

If you think computers aren't affected by radiation or nuclear winter I've got some bad news about where their power comes from and what the main principle of electricity is

What you're thinking of is Terminator

[–] hector@lemmy.today 3 points 4 hours ago

And they don't have cognition at all. They do not, and cannot, think like we do. Maybe some day we will learn to make real AI, but these LLMs are not it. It's a cheap-trick imitation of intelligence.

[–] Jax@sh.itjust.works 4 points 6 hours ago (2 children)

Radiation absolutely fucks electronic components

[–] hector@lemmy.today 0 points 3 hours ago (1 children)

I think the EMP frying electronics is pretty much limited to the blast zone. The fallout from a weapon spreads around the world, circling on the winds countless times and dropping dust everywhere, but the EMP is localized to roughly the area of physical destruction. Not sure exactly, though.

Neutron bombs, I'm not entirely sure how the physics works, but they produce comparatively little blast and physical destruction and mostly just kill everything with radiation.

[–] Jax@sh.itjust.works 3 points 3 hours ago (1 children)

I repeat, radiation absolutely fucks electronic components. I am not talking about an emp, I am talking about radiation.

[–] hector@lemmy.today 1 points 3 hours ago (1 children)

Oh, how far from the blast, and how does it mess them up, do you know? I guess I should know that; I've only really heard about the EMP, and I'm not sure how a neutron bomb would affect electronics either.

[–] Jax@sh.itjust.works 1 points 1 hour ago* (last edited 1 hour ago)

No, that I can't answer — it would depend entirely on the level of fallout and where it happens to land.

You would need to be able to perfectly, and I mean perfectly, predict weather months in advance in order to prepare accordingly.

The reality is that for an AI, or rather an AGI, to make the choice to launch nukes would require it to reach a point where it accepts the potential loss of its own 'life' in exchange for whatever value a nuclear war might hold. I struggle to believe that a 'true' AGI would make that choice. There are far too many variables to control compared with a biological agent, many of which likely would not affect a machine.

Now, a modern AI making that choice? Absolutely possible, the things are fucking crazy with literally no concept of what life is.

[–] Earthman_Jim@lemmy.zip 1 points 6 hours ago* (last edited 6 hours ago) (2 children)

The electromagnetic pulse caused by a nuke would pop resistors too. AI would more likely use biological means to get rid of us.

[–] NihilsineNefas@slrpnk.net 1 points 1 hour ago

Like heating the planet another degree and starving us out of existence by killing off biodiversity until the crops die out... Like they're doing now?

(I say "us" when I really just mean the 99% of people who haven't got self-sufficient underground complexes)

[–] SocialMediaRefugee@lemmy.world 2 points 6 hours ago (1 children)

Assuming AI would care about itself and not just "solving the problem".

[–] Earthman_Jim@lemmy.zip 1 points 6 hours ago

Yeah, these doom scenarios require cascading assumptions and no real answer, except maybe "don't".

[–] Humanius@lemmy.world 156 points 1 day ago (3 children)
[–] hector@lemmy.today 1 points 3 hours ago

That explains social media nowadays: the only way not to lose is not to play. It's a rigged game.

[–] privatepirate@lemmy.zip 15 points 1 day ago (1 children)
[–] ShawiniganHandshake@sh.itjust.works 63 points 1 day ago (2 children)

The 1983 movie WarGames. This is the computer's conclusion after simulating every possible outcome of Global Thermonuclear War.

[–] bus_factor@lemmy.world 11 points 1 day ago (7 children)

I don't know if we're doing spoilers for 40+ year old movies, but

(spoiler) Isn't this really its conclusion after being told to play tic-tac-toe against itself? Then it learned from that and applied it to its Global Thermonuclear War simulations.
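The tic-tac-toe lesson the computer draws in the film is easy to verify with a few lines of minimax: under perfect play by both sides, the game is always a draw. A minimal sketch (mine, obviously not the film's):

```python
from functools import lru_cache

# Exhaustive minimax over tic-tac-toe. The board is a tuple of 9 cells,
# each 'X', 'O', or ' '. X maximizes the score, O minimizes it.

LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best achievable score for X: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board full with no winner: draw
    other = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + (player,) + board[i+1:], other)
              for i, cell in enumerate(board) if cell == ' ']
    return max(scores) if player == 'X' else min(scores)

print(minimax((' ',) * 9, 'X'))  # -> 0: perfect play is always a draw
```

"A strange game. The only winning move is not to play."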

[–] privatepirate@lemmy.zip 18 points 1 day ago (2 children)

Thank you so much I'm going to watch it!

[–] MedicPigBabySaver@lemmy.world 16 points 1 day ago

It's a fun classic.

[–] BlameTheAntifa@lemmy.world 98 points 1 day ago* (last edited 1 day ago) (2 children)

The atrocities at Hiroshima and Nagasaki have been hand-waved extensively in writing — the same writing that AI is trained on. So naturally, AI will recommend the atrocity that has been justified by “instantly winning the war” and “saving millions of lives.”

!fuck_ai@lemmy.world

[–] technocrit@lemmy.dbzer0.com 48 points 1 day ago (12 children)

hand-waved

I think you mean white-washed, misrepresented, and celebrated.


AI is suicidal because it was trained on the internet and we're all depressed here.

[–] GutterRat42@lemmy.world 38 points 1 day ago (2 children)
[–] aeronmelon@lemmy.world 34 points 1 day ago

Civilization Gandhi, is that you?

[–] olympicyes@lemmy.world 28 points 1 day ago (2 children)

They forgot to make their LLMs play thousands of games of tic-tac-toe first.
