Anthropic says distillation attacks against Claude are increasing and calls for coordinated industry and policy action. In an accompanying post, the company reports campaign-level abuse patterns and outlines technical and operational countermeasures.
#distillation
On February 23, 2026, Anthropic said it had detected large-scale distillation abuse involving roughly 24,000 fraudulent Claude accounts and more than 16 million exchanges. The company accused three Chinese AI firms — DeepSeek, Moonshot AI (Kimi), and MiniMax — of creating the accounts to extract training data for model distillation, framing the episode as both a model-security problem and a policy challenge, and as a major escalation in AI intellectual property disputes.
Separately, a Reddit thread amplified an Ars Technica report that Google had detected an extraction campaign of more than 100,000 prompts against Gemini, reopening questions about distillation, defenses, and IP boundaries.