Pentagon vs. Anthropic: A Constitutional Moment for AI Ethics
Tensions between the U.S. Department of Defense and AI company Anthropic reached a boiling point on February 23, 2026, when Defense Secretary Pete Hegseth formally summoned Anthropic CEO Dario Amodei to the Pentagon.
The Core Dispute
The confrontation stems from Anthropic's refusal to allow its Claude AI to be used for mass surveillance of Americans or fully autonomous weapons systems—those that fire without human involvement. In response, the DoD threatened to designate Anthropic as a "supply chain risk," a label typically reserved for foreign adversaries.
What's at Stake
Anthropic holds a $200 million contract with the Department of Defense. A supply chain risk designation would void that contract and compel other Pentagon partners to discontinue use of Claude entirely. This represents not just a financial blow but a potential precedent for how the U.S. government handles AI companies that resist certain military applications.
The Meeting's Tone
A senior defense official described the meeting bluntly: "This is not a get-to-know-you meeting... This is a sh*t-or-get-off-the-pot meeting." The ultimatum framing signals the DoD is prepared to escalate if Anthropic does not comply.
Broader Implications
This clash marks a significant moment in AI governance—illustrating the conflict between AI companies' ethical guardrails and the expanding role of AI in national security. As major AI labs deepen involvement in defense, the question of where to draw ethical limits in AI development grows increasingly contentious.
Source: TechCrunch — Defense Secretary summons Anthropic's Amodei over military use of Claude
Related Articles
After Trump ordered federal agencies to stop using Anthropic AI, the Pentagon designated the firm a national security supply chain risk—and OpenAI secured a competing Defense Department agreement within hours.
Anthropic updated its Responsible Scaling Policy page on April 2, 2026 and moved the policy to version 3.1. The company says the revision mostly clarifies its AI R&D threshold language and makes explicit that it can pause development even when the RSP does not strictly require it.
Axios reports the NSA is using Anthropic's Mythos Preview even as Pentagon officials call the company a supply-chain risk. The clash puts AI safety limits, federal cyber demand, and procurement politics in the same room.