Anthropic Identifies Industrial-Scale Model Distillation Attacks by DeepSeek, Moonshot AI, and MiniMax

Original: Anthropic: "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax."

LLM · Feb 24, 2026 · By Insights AI (Reddit)

Anthropic's Bombshell Accusation

Anthropic has publicly accused three Chinese AI companies — DeepSeek, Moonshot AI (Kimi), and MiniMax — of conducting industrial-scale distillation attacks against its Claude models. According to reporting by the Wall Street Journal, these companies allegedly set up more than 24,000 fraudulent Claude accounts and extracted training data from 16 million conversations.

What Is Model Distillation?

Model distillation is a legitimate technique in AI research where outputs from a larger "teacher" model are used to train a smaller "student" model. However, using fraudulent accounts to systematically harvest millions of exchanges, in direct violation of the provider's terms of service, crosses into legally and ethically contested territory.
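
For context, the sketch below shows the standard teacher-student setup: the student is trained to match the teacher's softened output distribution alongside an ordinary supervised loss. The models, data, temperature, and loss weighting are toy placeholders for illustration, not any lab's actual pipeline.

```python
# Minimal knowledge-distillation sketch (teacher-student training).
# All models, data, and hyperparameters here are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEMPERATURE = 2.0   # softens the teacher's logits so relative probabilities are visible
ALPHA = 0.5         # mix between distillation loss and standard cross-entropy

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, labels):
    with torch.no_grad():                  # the teacher is frozen; it is only queried for outputs
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=-1),
        F.softmax(teacher_logits / TEMPERATURE, dim=-1),
        reduction="batchmean",
    ) * (TEMPERATURE ** 2)

    # Standard supervised loss on the true labels
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = ALPHA * soft_loss + (1 - ALPHA) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: random inputs and labels stand in for real training data
x = torch.randn(16, 32)
labels = torch.randint(0, 10, (16,))
print(distillation_step(x, labels))
```

The dispute in this case is not about the technique itself but about where the teacher outputs come from: distilling your own model is routine, while harvesting a competitor's API outputs against its terms of service is what Anthropic alleges here.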

Anthropic views this not merely as a terms-of-service violation but as a form of intellectual property theft, alleging the stolen data was directly used to boost the performance of competing AI models.

Industry Implications

The allegations have reignited long-standing debates about how Chinese AI startups achieved such rapid performance gains. DeepSeek in particular attracted global attention with models that rival frontier systems at a fraction of the cost, a gap critics attribute in part to data extraction from competitors.

Anthropic is reportedly considering legal action. The disclosure could accelerate efforts across the AI industry to build more robust protections against systematic query abuse and distillation harvesting.

What Comes Next

This case may mark a turning point for AI intellectual property enforcement. Other frontier AI labs including OpenAI and Google are likely reviewing their own exposure to similar attacks. Expect stricter API rate limits, anomaly detection systems, and potentially new legal frameworks to emerge in response.
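
As a rough illustration of the kind of defense providers might deploy, the sketch below implements a simple sliding-window check that flags accounts issuing queries at automated volume. The thresholds, account IDs, and the QueryAbuseDetector class are hypothetical; real anomaly-detection systems combine many stronger signals than raw request counts.

```python
# Hypothetical sliding-window detector for systematic query abuse.
# Thresholds and account IDs are invented for illustration only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600          # examine the last hour of traffic per account
MAX_REQUESTS_PER_WINDOW = 500  # assumed per-account limit before flagging

class QueryAbuseDetector:
    def __init__(self):
        self._history = defaultdict(deque)  # account_id -> timestamps of recent requests

    def record_request(self, account_id: str, now: float | None = None) -> bool:
        """Record one request; return True if the account exceeds the window limit."""
        now = time.time() if now is None else now
        window = self._history[account_id]
        window.append(now)
        # Drop timestamps that have fallen outside the sliding window
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_REQUESTS_PER_WINDOW

detector = QueryAbuseDetector()
# Simulate a burst of automated traffic from a single account
flagged = any(detector.record_request("acct-123", now=i * 0.5) for i in range(1000))
print("flagged:", flagged)
```
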
