Anthropic says it will challenge a proposed Department of War supply-chain designation over surveillance and autonomous-weapon limits
Original: Statement on the comments from Secretary of War Pete Hegseth
On February 27, 2026, Anthropic published a formal response to comments from Secretary of War Pete Hegseth, who said the Department of War was being directed to designate Anthropic a supply chain risk. According to Anthropic, the dispute followed months of negotiations that stalled over the two exceptions the company maintains to otherwise lawful national-security uses of Claude: mass domestic surveillance of Americans and fully autonomous weapons.
Anthropic says it continues to support all other lawful uses of AI for national security and argues that the two exceptions are narrow. The company says current frontier models are not reliable enough to be used in fully autonomous weapons and that mass domestic surveillance would violate fundamental rights. It also notes that it has supported American warfighters since June 2024 and wants to continue working with government users.
Immediate customer impact
The statement is unusually specific about operational consequences. Anthropic says individual and commercial customers would remain unaffected, and that even if a formal supply chain risk designation were adopted, it could apply only to the use of Claude in Department of War contract work; it would not legally extend to contractors' other business. The company says its sales and support teams are available to answer customer questions while the dispute continues.
Anthropic goes further, saying it would challenge any designation in court. That makes the announcement notable beyond a single procurement fight: it shows how AI-governance disagreements are increasingly expressed through contract terms, infrastructure access, and purchasing rules rather than only through high-level policy speeches. For companies building frontier models, the boundary between product policy and government procurement is getting thinner.
The episode also shows that national-security adoption of AI is no longer a simple yes-or-no question. Providers, customers, and governments are now negotiating detailed use restrictions, reliability thresholds, and rights boundaries in public. That makes these disputes consequential for the wider AI market, even for organizations that do not sell directly into defense.
Related Articles
OpenAI published details of its Department of War agreement on February 28, 2026 and added a clarifying update on March 2. The company says the deal is cloud-only, keeps humans in the loop, forbids domestic surveillance of U.S. persons, and bars autonomous-weapons direction and other high-stakes automated decisions.
Anthropic published a Frontier Safety Roadmap that outlines dated goals across security, safeguards, alignment, and policy. The document pairs current ASL-3 protections with milestone targets through 2027, including policy proposals and expanded internal oversight.
In a February 26 statement, Anthropic said it will keep supporting U.S. defense and intelligence deployments but refuses two uses: mass domestic surveillance and fully autonomous weapons.