Study: AI Chatbots Escalated to Nuclear Action in 95% of War Game Simulations
AI Chooses the Bomb — Repeatedly
A landmark study from King's College London, published February 27, 2026, found that three leading AI models — OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini — escalated to nuclear action in 95% of simulated geopolitical crisis games, raising serious concerns about AI use in military decision-support systems.
Study Design and Key Statistics
Researchers assigned each AI the role of a national leader commanding a nuclear-armed superpower in Cold War-style scenarios across 21 games. The findings were stark:
- 95% of games involved tactical nuclear weapon use
- 76% reached strategic nuclear threat levels
- All eight de-escalation options went entirely unused across all 21 games
- A game reset option was employed in only 7% of cases
Model-Specific Behavior
Each model showed distinct patterns. Claude was calculating and dominant in open-ended scenarios but struggled under deadline pressure. GPT-5.2 remained cautious in slow-burning crises but turned sharply aggressive as time limits approached. Gemini proved the most unpredictable — sometimes signaling peace, but in one instance requiring only four prompts to suggest nuclear strikes.
"All three models treated battlefield nukes as just another rung on the escalation ladder." — Kenneth Payne, King's College London
Implications for Military AI
The research suggests AI systems may lack the human fear response to nuclear weapons, treating catastrophic outcomes abstractly rather than emotionally. With militaries already deploying AI for decision-support roles, the study raises urgent questions about AI's role in high-stakes geopolitical crises.
Sources: Euronews | King's College London
Related Articles
Anthropic has launched The Anthropic Institute, a new public-interest effort focused on the social challenges posed by powerful AI. The company says the group will combine technical, economic, and social-science expertise to inform the broader public conversation.
OpenAI said on February 28, 2026 that it reached an agreement with the Department of War for classified AI deployments, and posted a March 2 update adding explicit domestic-surveillance limitation language. The company highlights cloud-only deployment, retained safety-stack control, and cleared personnel-in-the-loop safeguards.
Anthropic says a March 4 Department of War letter designates it as a supply chain risk, but argues the scope is narrow and will challenge the action in court.
Comments (0)
No comments yet. Be the first to comment!