NIST to Evaluate Google, Microsoft, xAI AI Models Before Public Release

AI · May 9, 2026 · By Insights AI

A Policy Shift Prompted by Claude Mythos

The Center for AI Standards and Innovation (CASI), housed within the U.S. Department of Commerce, announced on May 5 that it has signed agreements with Google DeepMind, Microsoft, and xAI to evaluate their frontier AI models for national security and public safety risks before public release.

The impetus was Anthropic's restricted cybersecurity model, Claude Mythos, which demonstrated the ability to autonomously identify thousands of zero-day vulnerabilities across every major operating system and web browser. These autonomous hacking capabilities alarmed policymakers enough to prompt a rapid review of how the government should oversee powerful AI.

An FDA-Style Review Under Consideration

White House economic advisor Kevin Hassett told Fox Business that the administration is "studying possibly an executive order to give a clear road map" for AI safety review, explicitly comparing the framework to FDA drug approval — a significant rhetorical shift from the administration's earlier stance of minimal regulation.

CASI has already completed more than 40 AI model evaluations and will conduct both pre-launch reviews and post-deployment research. Notably, Anthropic — whose Mythos model prompted the policy discussion — is absent from the current agreements, as it has taken its own approach of severely limiting Mythos access.

Sources: CNBC · Washington Post

