OpenAI Details Department of War Agreement With Cloud-Only Deployment and Explicit Surveillance Limits
Original: Our agreement with the Department of War
What OpenAI announced
OpenAI announced on February 28, 2026 that it had reached an agreement with the Department of War to deploy advanced AI systems in classified environments. In a March 2, 2026 update, OpenAI said both parties added explicit language clarifying that the system will not be intentionally used for domestic surveillance of U.S. persons, including through commercially acquired personal or identifiable information.
OpenAI also stated that services under this agreement will not be used by Department of War intelligence agencies such as the NSA, and that any such use would require a separate agreement.
Core red lines and deployment constraints
OpenAI frames the deal around three red lines: no mass domestic surveillance, no directing autonomous weapons systems, and no high-stakes automated decision-making (for example, social-credit-type systems). It argues these are protected through a layered enforcement model rather than policy text alone.
On the technical side, OpenAI says the deployment is cloud-only rather than edge-deployed, with the company retaining control of the safety stack. It also says cleared OpenAI engineers and safety/alignment researchers will remain in the loop during deployment and operation.
Contract language and oversight structure
The published excerpts reference legal and policy constraints, including the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act (FISA) of 1978, and DoD Directive 3000.09 (dated January 25, 2023). OpenAI says the contract language requires human control wherever law, regulation, or Department policy requires it, and specifies verification, validation, and testing requirements for autonomous and semi-autonomous systems before deployment.
OpenAI further says it could terminate the contract if terms are violated. The post also says the Department of War plans to convene a working group including frontier AI labs, cloud providers, and policy/operational leaders, with OpenAI participating.
Why this is high impact
This is a high-signal policy and infrastructure story because it centers on deployment governance, not just model capability. The practical constraints in this type of agreement are architectural and operational: where the model runs, who controls safeguards, and how human oversight is embedded. Those decisions determine whether stated red lines are auditable and enforceable in real-world national-security usage.
For enterprise and public-sector observers, the broader implication is that AI procurement standards are likely to move toward detailed deployment terms, legal cross-references, and explicit control-layer obligations rather than relying on general acceptable-use language.
Source: OpenAI statement