AI Safety Governance Watch: CAISI Agreements and Pentagon AI Contracts

Two pillars of U.S. AI safety governance in May 2026: the Pentagon signs classified AI deals with seven tech giants, excluding Anthropic, while NIST's CAISI secures pre-deployment safety evaluation agreements with Google DeepMind, Microsoft, and xAI, later expanded as Claude Mythos enters the picture.

The Department of Defense cleared OpenAI, Google, Microsoft, AWS, Oracle, Nvidia, and SpaceX to deploy AI on classified Impact Level 6 (IL6) and IL7 military networks. Anthropic was labeled a "supply chain risk" after insisting on safety guardrails for wartime AI use.
The Center for AI Standards and Innovation (CAISI) announced on May 5 that it had signed national security testing agreements with Google DeepMind, Microsoft, and xAI, expanding pre-deployment frontier AI evaluations focused on cybersecurity, biosecurity, and chemical weapons risks.
Under the agreements, CAISI will review frontier AI models for national security risks before launch. The policy shift follows alarm over the autonomous cybersecurity capabilities of Anthropic's Claude Mythos.