Anthropic Identifies Industrial-Scale Model Distillation Attacks by DeepSeek, Moonshot AI, and MiniMax
Original: Anthropic: "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax."
Anthropic's Bombshell Accusation
Anthropic has publicly accused three Chinese AI companies — DeepSeek, Moonshot AI (Kimi), and MiniMax — of conducting industrial-scale distillation attacks against its Claude models. According to reporting by the Wall Street Journal, these companies allegedly set up more than 24,000 fraudulent Claude accounts and extracted training data from 16 million conversations.
What Is Model Distillation?
Model distillation is a legitimate technique in AI research where outputs from a larger "teacher" model are used to train a smaller "student" model. However, using forged accounts to systematically harvest millions of exchanges, in direct violation of terms of service, crosses into legally and ethically contested territory.
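The teacher-student mechanism can be sketched in a few lines. This is a minimal, illustrative example of the classic distillation objective (training a student to match a teacher's temperature-softened output distribution); it is not Anthropic's or any accused party's actual pipeline, and all names and values below are hypothetical.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature gives softer distributions."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs,
    the core objective in standard knowledge distillation."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum())

# The student is trained to minimize this loss over many teacher outputs.
# In the alleged attacks, the "teacher outputs" would be API responses
# harvested at scale rather than a model the trainer owns.
teacher = [4.0, 1.0, 0.2]
student = [3.5, 1.2, 0.4]
loss = distillation_loss(teacher, student)
```

The loss is minimized when the student's distribution matches the teacher's, which is why large volumes of teacher outputs translate directly into student capability.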
Anthropic views this not merely as a terms-of-service violation but as a form of intellectual property theft, alleging the stolen data was directly used to boost the performance of competing AI models.
Industry Implications
The allegations have reignited long-standing debates about how Chinese AI startups achieved such rapid performance gains. DeepSeek in particular attracted global attention with models that rival frontier systems at a fraction of the cost — a gap critics attribute in part to data extraction from competitors.
Anthropic is reportedly considering legal action. The disclosure could accelerate efforts across the AI industry to build more robust protections against systematic query abuse and distillation harvesting.
What Comes Next
This case may mark a turning point for AI intellectual property enforcement. Other frontier AI labs including OpenAI and Google are likely reviewing their own exposure to similar attacks. Expect stricter API rate limits, anomaly detection systems, and potentially new legal frameworks to emerge in response.
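One of the simplest defenses mentioned above, a per-account rate limit, can be sketched as a sliding-window check. This is a generic illustration of the technique, not any provider's actual system; the class name, thresholds, and account IDs are all hypothetical.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Illustrative sliding-window rate limiter: each account may make at
    most max_requests calls within any window_seconds span. Sustained bulk
    extraction exceeds the limit and can be rejected or flagged for review."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = {}  # account_id -> deque of request timestamps

    def allow(self, account_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(account_id, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit within the window
        q.append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=3, window_seconds=60)
results = [limiter.allow("acct-1", now=t) for t in (0, 1, 2, 3)]
# The fourth request inside the 60-second window is refused.
```

Real anomaly-detection systems go well beyond this, correlating query patterns across accounts to catch distillation harvesting split over many identities, as the 24,000-account figure suggests attackers do.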
Related Articles
Anthropic introduced Claude Sonnet 4.6 on February 17, 2026, adding a beta 1M token context window while keeping API pricing at $3/$15 per million tokens. The company says the new default model improves coding, computer use, and long-context reasoning enough to cover more work that previously pushed users toward Opus-class models.
A Reddit thread amplified an Ars Technica report that Google detected a 100,000+ prompt extraction campaign against Gemini, reopening questions about distillation, defense, and IP boundaries.