Anthropic said on March 5, 2026 that it had received a supply-chain risk designation letter from the Department of War. The company says the scope is narrow, plans to challenge the action in court, and will continue transition support for national-security users.
#policy
Anthropic published a Frontier Safety Roadmap that outlines dated goals across security, safeguards, alignment, and policy. The document pairs current ASL-3 protections with milestone targets through 2027, including policy proposals and expanded internal oversight.
Anthropic published Responsible Scaling Policy Version 3.0 on February 24, 2026. The update keeps the ASL framework but retools how commitments are managed when capability thresholds are hard to measure unambiguously.
Anthropic says distillation attacks against Claude are increasing and calls for coordinated industry and policy action. In an accompanying post, the company reports campaign-level abuse patterns and outlines technical and operational countermeasures.
OpenAI’s February 2026 safety report says it banned accounts linked to seven operations originating in China. The company says abuse covered cyber activity, covert influence, and scams, while overall malicious use remained low relative to legitimate use.
Sam Altman announced OpenAI reached an agreement with the U.S. Department of War to deploy AI models on classified networks, with core safety principles including bans on domestic mass surveillance and autonomous weapon systems.
Anthropic has officially rejected the Pentagon’s latest proposal, stating, “We cannot in good conscience accede to their request.” The move underscores Anthropic’s commitment to its AI safety principles and the tension between powerful AI capabilities and military applications.
OpenAI CEO Sam Altman announced a Pentagon deal to deploy AI models in classified networks just hours after Anthropic was blacklisted by the Trump administration. The agreement explicitly includes prohibitions on mass domestic surveillance and autonomous weapons.
Anthropic announced Responsible Scaling Policy v3 on February 24, 2026 and paired it with a Frontier Safety Roadmap. The company says it will update the policy every 3-6 months and publish model-specific Risk Reports to improve verifiability.
OpenAI said on February 28, 2026 that it reached an agreement with the U.S. Department of War to deploy advanced AI systems in classified environments. In a follow-up post, the company said the arrangement uses a multi-layer safety approach and cloud-based deployment with cleared personnel in the loop.
Anthropic released Responsible Scaling Policy 3.0, adding a structured Frontier Safety and Security Framework and new roadmap and reporting mechanisms. The update emphasizes explicit commitments to pause or withhold deployment if risk thresholds are exceeded.