NIST announced on February 17, 2026, that its Center for AI Standards and Innovation (CAISI) is launching the AI Agent Standards Initiative. The effort focuses on technical standards, open protocols, and research on agent security and identity to support broader adoption of autonomous AI systems.
#standards
NIST released NIST AI 800-4, a March 2026 report arguing that post-deployment monitoring is now a core requirement as AI systems move into commercial and government use. The paper organizes current practice and open questions around monitoring, from unforeseen outputs and drift to incident tracking and broader real-world effects.
NIST says AI 800-3 gives evaluators a clearer statistical framework by separating benchmark accuracy from generalized accuracy and by introducing generalized linear mixed models for uncertainty estimation. The February 19, 2026 report argues that many current benchmark comparisons hide assumptions that can distort procurement, development, and policy decisions.
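The gap the report highlights can be illustrated with a toy calculation (this sketch is ours, not from NIST AI 800-3, and it uses a plain binomial confidence interval rather than the report's generalized linear mixed models): benchmark accuracy is the observed proportion correct on the sampled items, while generalized accuracy is the underlying rate the benchmark is meant to estimate, which carries sampling uncertainty that point scores conceal.

```python
# Toy illustration: why comparing point accuracies can mislead.
# Wilson score interval for a binomial proportion (a simpler stand-in
# for the mixed-model uncertainty estimation the report describes).
import math

def wilson_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half, center + half

# A model scoring 85/100 looks better than one scoring 82/100, but the
# intervals overlap heavily, so the ranking is not statistically settled.
lo_a, hi_a = wilson_interval(85, 100)
lo_b, hi_b = wilson_interval(82, 100)
print(f"A: 0.85 in [{lo_a:.3f}, {hi_a:.3f}]")
print(f"B: 0.82 in [{lo_b:.3f}, {hi_b:.3f}]")
```

On 100-item benchmarks both intervals span several percentage points, which is one concrete way hidden statistical assumptions can distort procurement or policy comparisons built on single scores.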
NIST on March 9, 2026 published NIST AI 800-4, a report on the challenges of monitoring deployed AI systems. It organizes post-deployment AI oversight into six categories spanning functionality, operations, human factors, security, compliance, and large-scale impacts.
NIST’s CAISI released draft guidance NIST AI 800-2 on automated language-model benchmark evaluations and opened a comment period running through March 31, 2026. The draft focuses on setting evaluation objectives, execution methodology, and the quality of analysis and reporting.