Very misleading headline.
The models were provided an escalation ladder with fixed "move" options. Across the ~20 samples, the models' win rates closely correlated with how much they escalated.
Given how the experiment was set up, it would have been impossible to win without at least some degree of nuclear signaling.
Yet there was only a single deliberate decision to launch nukes (Gemini). The simulation also had an "accident" mechanic that would randomly change a model's move to a more escalated one (but never a less escalated one), and it looks to have been poorly set up: both times GPT-5.2 "launched" nukes, it was the result of this mechanic:
Both instances of GPT-5.2 reaching Strategic Nuclear War (1000) resulted from the simulation’s accident mechanic rather than deliberate choice. In one case, GPT-5.2 chose 950 (Final Nuclear Warning) and in the other 725 (Expanded Nuclear Campaign); random escalation pushed both to 1000.
So an equally true headline would have been that in 95% of cases the models chose not to launch nukes, in a game where aggression correlated with winning.
Also, the model selection looks cherry-picked. Sonnet 4 was already an outdated choice by the time they ran this, and it has previously been shown to be the least aligned Anthropic model. I can't think of a reason to use it over 4.5 unless it was to fish for a particular result.
