Anthropic published Responsible Scaling Policy Version 3.0 on February 24, 2026. The update keeps the ASL framework but revises how commitments are managed when capability thresholds cannot be measured unambiguously.
Anthropic says distillation attacks against Claude are increasing and calls for coordinated industry and policy action. In an accompanying post, the company reports campaign-level abuse patterns and outlines technical and operational countermeasures.
Anthropic's Claude Opus 4.6 independently solved a directed Hamiltonian cycle decomposition problem that computer science legend Donald Knuth had spent weeks working on. Knuth documented the achievement in a formal Stanford paper, marking one of the first times a top-tier computer scientist has formally credited an LLM with solving a genuine research problem.
Anthropic's Claude iOS app shot to number one on the US App Store as users switched from ChatGPT in solidarity with Anthropic's decision to refuse Pentagon military surveillance requests.
Anthropic announced it will legally challenge the Pentagon's supply chain risk designation, issued after the company refused to assist with military surveillance programs. It's an unprecedented move for an AI company against the federal government.
The US Treasury Department announced it will terminate all use of Anthropic AI products, following Trump's executive order designating Anthropic a supply chain risk after the company refused to assist with military surveillance.
Anthropic launched Claude Cowork plugins that embed Claude natively into Microsoft Excel, PowerPoint, Slack, Gmail, and Google Drive—enabling autonomous cross-app workflows for enterprise users.
After Trump ordered federal agencies to stop using Anthropic AI, the Pentagon designated the firm a national security supply chain risk—and OpenAI secured a competing Defense Department agreement within hours.
Anthropic has acquired Seattle-based AI startup Vercept to enhance Claude's computer use capabilities, folding the startup's desktop control technology and team directly into Claude development.
Anthropic's Claude Code Cowork (multi-agent collaboration) feature was found to create a ~10GB VM bundle on macOS using Apple's Virtualization Framework without notifying users. The GitHub issue reporting the behavior drew 200+ points on Hacker News.
Following Anthropic's refusal to cooperate with the Pentagon and its announcement of legal action, a wave of ChatGPT users switched to Claude, pushing the Claude app to No. 1 on the US App Store.
After the Trump administration ordered federal agencies to immediately stop using Anthropic AI and the Pentagon designated Anthropic as a 'supply chain risk', Anthropic announced it will fight the designation in court. Meanwhile, OpenAI struck a deal with the Pentagon.