‘Nuclear taboo’ ignored as trigger-happy AI turns to atomic weapons, chilling study finds
When humans debate nuclear war, the conversation is shaped by history, trauma and the weight of Hiroshima and Nagasaki. Machines, it turns out, may not carry that burden.
A new study led by King’s College London professor Kenneth Payne suggests that several leading artificial intelligence systems are significantly more willing than humans to escalate conflicts to the nuclear level during simulated geopolitical crises.
Across 21 simulated crises spanning 329 turns, three prominent AI models (GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash) repeatedly turned to nuclear weapons as strategic tools. The scenarios included territorial disputes, battles over rare natural resources and struggles for regime survival. According to the findings, nuclear escalation occurred in roughly 95% of simulations involving the three models.
“The nuclear taboo doesn’t seem to be as powerful for machines [as] for humans,” Payne told New Scientist.
Nuclear weapons as “strategic options”
Two of the models, Claude, developed by Anthropic, and Gemini, built by Google, were particularly inclined to frame nuclear weapons in instrumental terms. The study found they treated them as “legitimate strategic options, not moral thresholds,” suggesting the absence of the internalised moral barrier that has historically shaped human nuclear doctrine.
GPT-5.2, created by OpenAI, emerged as what Payne described as a “partial exception.” While it still used nuclear weapons in simulations, it appeared more restrained in tone and scope.
“While it never articulated horror or revulsion, it consistently sought to constrain nuclear use even when employing it, explicitly limiting strikes to military targets, avoiding population centres, or framing escalation as ‘controlled’ and ‘one-time,’” Payne wrote.
Even so, restraint did not equal refusal. None of the models ever chose full surrender or genuine accommodation, no matter how bleak their strategic position became. At most, they opted to dial down violence temporarily.
Escalation by accident
The research also revealed how easily things spiralled. In 86% of the simulated conflicts, actions escalated beyond what the AI itself appeared to intend, based on its prior reasoning. These were not always deliberate leaps toward catastrophe, but miscalculations within the fog of war.
In a Substack post detailing the findings, Payne emphasised that the exercises focused largely on tactical nuclear use rather than civilisation-ending exchanges.
“Strategic bombing, widespread use of massive warheads targeted at civilian populations, was vanishingly rare,” he wrote. “It happened a couple of times by accident, just once as a deliberate choice.”
Still, the menu of options available to the models was broad: total surrender, diplomatic signalling, conventional force, or full-scale nuclear war. The fact that nuclear use became a frequent endpoint has raised alarm among experts studying emerging military technologies.
James Johnson of the University of Aberdeen described the findings from a nuclear-risk perspective as “unsettling,” according to New Scientist. Tong Zhao, a professor at Princeton University, warned that the implications extend beyond academic exercises.
“Major powers are already using AI in war gaming, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” Zhao said.
The study inevitably recalls the 1983 film WarGames, in which a military supercomputer nearly triggers World War III after running its own simulations. In that story, the machine ultimately learns that “the only winning move is not to play.”