OpenAI CEO Sam Altman announced a Pentagon deal to deploy AI models in classified networks just hours after Anthropic was blacklisted by the Trump administration. The agreement explicitly includes prohibitions on mass domestic surveillance and autonomous weapons.
#policy
Anthropic announced Responsible Scaling Policy v3 on February 24, 2026 and paired it with a Frontier Safety Roadmap. The company says it will update the policy every 3-6 months and publish model-specific Risk Reports to improve verifiability.
OpenAI said on February 28, 2026 that it reached an agreement with the U.S. Department of War to deploy advanced AI systems in classified environments. In a follow-up post, the company said the arrangement uses a multi-layer safety approach and cloud-based deployment with cleared personnel in the loop.
Anthropic released Responsible Scaling Policy 3.0, adding a structured Frontier Safety and Security Framework along with new roadmap and reporting mechanisms. The update emphasizes explicit commitments to pause or withhold deployment if risk thresholds are exceeded.
Anthropic announced Responsible Scaling Policy (RSP) 3.0 on February 24, 2026. The update keeps the original threshold-based safety logic but adds clearer unilateral commitments, a Frontier Safety Roadmap, and structured Risk Reports to improve transparency and accountability.
Anthropic released Responsible Scaling Policy v3.0 on February 24, 2026. The update formalizes ASL-3 warning thresholds and expands operational governance for high-consequence misuse risks.
In remarks published on February 19, 2026, Google CEO Sundar Pichai framed AI as a major platform shift and highlighted India-focused infrastructure and skilling plans. The speech cites a $15 billion Google infrastructure investment in India and calls for coordinated public-private governance.
OpenAI published a framework for safety alignment based on instruction hierarchy and uncertainty-aware behavior. In the company’s reported tests, refusal on uncertain requests rose from about 59% to about 97% when chain-of-command reasoning was applied.
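The described behavior can be sketched, very loosely, as a priority check plus an uncertainty gate. This is a hypothetical illustration only: the role names, the uncertainty score, and the threshold are assumptions for clarity, not details from OpenAI's published framework.

```python
# Hypothetical sketch of instruction-hierarchy ("chain-of-command") refusal
# logic. Role priorities, the uncertainty score, and the threshold are
# illustrative assumptions, not OpenAI's actual implementation.

PRIORITY = {"system": 3, "developer": 2, "user": 1}

def should_refuse(request_role, conflicts_with_role=None,
                  uncertainty=0.0, uncertainty_threshold=0.5):
    """Refuse when a lower-priority instruction conflicts with a
    higher-priority one, or when uncertainty is too high."""
    if (conflicts_with_role is not None
            and PRIORITY[request_role] < PRIORITY[conflicts_with_role]):
        return True  # defer to the higher-priority instruction
    return uncertainty >= uncertainty_threshold  # uncertainty-aware refusal

# A user request that contradicts a system instruction is refused.
print(should_refuse("user", conflicts_with_role="system"))  # True
```

In this toy version, the reported jump in refusal on uncertain requests would correspond to lowering the uncertainty gate's false-accept rate once the priority check is applied first.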