NIST's CAISI Signs Pre-Deployment AI Safety Agreements With Google DeepMind, Microsoft, and xAI

AI · May 6, 2026 · By Insights AI

Pre-Deployment Evaluations Are Now the Baseline

The Center for AI Standards and Innovation (CAISI) — the renamed US AI Safety Institute at NIST — announced on May 5, 2026, that it had signed national security testing agreements with Google DeepMind, Microsoft, and xAI. OpenAI and Anthropic renegotiated their existing evaluation partnerships to align with the Trump administration's AI Action Plan priorities.

Under the agreements, CAISI will conduct pre-deployment evaluations of frontier AI models, post-deployment assessments, and targeted research on AI security risks. Models will be tested in classified environments, and developers frequently provide versions with reduced safeguards so that national security capabilities can be evaluated thoroughly.

Scope: Cyber, Bio, Chemical Weapons

CAISI focuses on "demonstrable risks" — cybersecurity, biosecurity, and chemical weapons — a more targeted mandate than that of the Biden-era AISI. Commerce Secretary Howard Lutnick has designated CAISI as the US government's primary point of contact for AI industry testing and best-practice development.

40+ Evaluations Complete, Coverage Expanding

CAISI has completed more than 40 model evaluations to date, including evaluations of unreleased models. The new agreements with Google DeepMind, Microsoft, and xAI significantly expand coverage across the world's most capable frontier AI programs. Microsoft's partnership also involves coordination with the UK AI Security Institute.

Source: NIST, SiliconAngle
