Anthropic Publishes Department of War Position: Continue Defense Work, Keep Two Safeguards
Original: A statement from Anthropic CEO, Dario Amodei, on our discussions with the Department of War. https://t.co/rM77LJejuk
X post and source statement
On February 26, 2026, Anthropic shared an X post linking to a public statement by CEO Dario Amodei about discussions with the U.S. Department of War. The statement frames Anthropic as supportive of U.S. and allied defense use of AI, and says Claude is already deployed for mission-critical work such as intelligence analysis, modeling and simulation, planning, and cyber operations.
What Anthropic says it will and will not support
The statement makes two explicit exclusions. First, Anthropic says it will not support mass domestic surveillance, arguing that AI can aggregate legally purchased but fragmented data into large-scale personal monitoring. Second, it says current frontier models are not yet reliable enough for fully autonomous weapons, and should not be used in that role without stronger safeguards and oversight.
Anthropic also says it offered to work with the Department of War on reliability R&D for these systems. The company states that the offer was not accepted.
Contract and governance friction
According to the statement, the Department of War has required vendors to accept "any lawful use" and to remove safeguards in the two disputed areas. Anthropic says it was warned of potential offboarding and other escalatory measures if it kept those restrictions. The company maintains that it cannot accept those terms, while also saying it prefers to continue serving defense customers with the two safeguards intact.
Why this is a high-signal policy update
This is a meaningful AI governance signal because it moves beyond abstract safety language into concrete contract boundaries tied to national-security deployment. For enterprise and public-sector teams, the practical implication is that model-provider risk is no longer just about capability and price; it now includes policy compatibility, procurement terms, and operational continuity when safeguard requirements diverge.
Primary sources: X post, Anthropic statement.
Related Articles
A high-scoring r/artificial post cites Axios reporting that Claude may have been used during a U.S. military operation. The dispute highlights a broader tension between defense usage demands and provider safety constraints.
Axios reports the NSA is using Anthropic's Mythos Preview even as Pentagon officials call the company a supply-chain risk. The clash puts AI safety limits, federal cyber demand, and procurement politics in the same room.
Why it matters: the same model Anthropic framed as too dangerous for public release was reportedly exposed twice in quick succession. The Verge says Mythos was first revealed through an unsecured data trove, then reached by unauthorized users from day one through guessed infrastructure and contractor access.