On February 23, 2026, Anthropic said it detected large-scale distillation abuse tied to roughly 24,000 fraudulent accounts and more than 16 million Claude exchanges. The company framed the issue as both a model security and policy challenge.
Anthropic released Responsible Scaling Policy (RSP) v3.0 on February 24, 2026. The update keeps the original threshold-based safety logic while formalizing ASL-3 warning thresholds, and it adds clearer unilateral commitments, a Frontier Safety Roadmap, and structured Risk Reports to improve transparency, accountability, and operational governance for high-consequence misuse risks.
Anthropic analyzed millions of real Claude interactions and found that the 99.9th-percentile session duration nearly doubled, to more than 45 minutes, over three months, with software engineering accounting for nearly half of all agentic use.
Anthropic published a new theory explaining why AI assistants like Claude express emotions and use anthropomorphic language, proposing that models select from personas inherited from fictional characters encountered during training.
Anthropic revealed that Chinese AI labs DeepSeek, Moonshot AI, and MiniMax created over 24,000 fraudulent accounts and generated more than 16 million Claude exchanges to extract Claude's capabilities and improve their own competing models.