Anthropic tells appeals court Claude can't be steered inside Pentagon networks

Original: Anthropic seeks to debunk Pentagon's claims about its control over AI technology in military systems

AI · Apr 24, 2026 · By Insights AI · 2 min read

Anthropic's fight with the Pentagon is turning into a test of what control means once a model leaves the lab. According to an April 22 AP report, the company's 96-page filing to the U.S. Court of Appeals in Washington argues that Anthropic cannot manipulate Claude after it has been deployed inside classified Pentagon military networks. That claim cuts directly against the Trump administration's effort to treat Anthropic as a supply-chain risk.

The dispute is not abstract policy theater. AP says it grows out of a contract fight over how AI can be used in fully autonomous weapons and over potential surveillance of Americans. Anthropic argues the Pentagon is retaliating illegally by branding the company with a designation meant to guard national-security systems against sabotage by foreign adversaries. The company wants the court to distinguish between supplying a model and retaining operational control over how it runs once it is installed in a classified environment.

The procedural posture matters. Earlier in April, the appeals court rejected Anthropic's request for an order that would have blocked the Pentagon's actions while the case moves forward. The new filing is meant to answer the court's questions before oral arguments on May 19. AP also notes that Anthropic fared better in a parallel federal case in San Francisco on the same issues; court filings say that earlier decision led the government to remove the stigmatizing labels there.

The business stakes are already visible. AP reports that the Pentagon canceled a $200 million contract with Anthropic after the dispute, and OpenAI later struck a deal to provide technology to the U.S. military. That turns a legal argument about model control into a procurement question with immediate revenue and strategic consequences.

What happens next could shape more than this case. If courts accept Anthropic's argument, frontier AI providers may be able to limit how much responsibility they carry once their models are embedded in government-controlled networks. If they reject it, deployment into classified environments may come with heavier assumptions about ongoing vendor control and liability.


