NIST to Evaluate Google, Microsoft, xAI AI Models Before Public Release
A Policy Shift Prompted by Claude Mythos
The Center for AI Standards and Innovation (CASI), housed within the National Institute of Standards and Technology (NIST) under the U.S. Department of Commerce, announced on May 5 that it has signed agreements with Google DeepMind, Microsoft, and xAI to evaluate their frontier AI models for national security and public safety risks before public release.
The impetus was Anthropic's restricted cybersecurity model, Claude Mythos, which demonstrated the ability to autonomously identify thousands of zero-day vulnerabilities across every major operating system and web browser. Those autonomous hacking capabilities alarmed policymakers enough to prompt a rapid review of how the government should oversee powerful AI.
An FDA-Style Review Under Consideration
White House economic advisor Kevin Hassett told Fox Business that the administration is "studying possibly an executive order to give a clear road map" for AI safety review, explicitly comparing the framework to FDA drug approval — a significant rhetorical shift from the administration's earlier stance of minimal regulation.
CASI has already completed more than 40 AI model evaluations and will conduct both pre-launch reviews and post-deployment research. Notably, Anthropic — whose Mythos model prompted the policy discussion — is absent from the current agreements, as it has taken its own approach of severely limiting Mythos access.
Sources: CNBC · Washington Post
Related Articles
The U.S. Department of Defense finalized AI deployment agreements with OpenAI, Google, Microsoft, AWS, NVIDIA, SpaceX, Reflection AI, and Oracle for its most classified networks. Anthropic was excluded after refusing to allow Claude to be used for purposes including autonomous weapons and mass surveillance.
Google has signed a classified AI agreement with the Pentagon allowing use of Gemini for any lawful military purpose. The deal came after Anthropic refused similar terms. Over 600 Google employees sent a letter opposing the contract.
The U.S. Department of Defense struck agreements with seven tech companies to deploy AI on its highest-security networks on May 1. Anthropic, which insisted on safety guardrails against autonomous weapons, is conspicuously absent.