Anthropic Accuses DeepSeek and Chinese AI Firms of Stealing 16M Claude Training Exchanges
Anthropic is accusing DeepSeek, Moonshot AI (Kimi), and MiniMax of setting up more than 24,000 fraudulent Claude accounts and distilling training data from 16 million exchanges.
Industrial-Scale AI Data Theft Allegations
Anthropic has leveled explosive accusations against three Chinese AI companies: DeepSeek, Moonshot AI (Kimi), and MiniMax. According to a Wall Street Journal report, the companies allegedly set up more than 24,000 fraudulent Claude accounts and used them to harvest training data from approximately 16 million conversation exchanges.
What Is Distillation and Why It Matters
Distillation in AI refers to the process of using outputs from a large language model (LLM) to train smaller or competing models. Anthropic's terms of service explicitly prohibit using Claude's outputs to train competing AI systems. Done without authorization at the scale alleged here, it constitutes both a terms-of-service violation and potential intellectual property infringement.
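To make the concept concrete, here is a minimal sketch of the classic distillation objective: the student model is trained to match the teacher's full probability distribution over next tokens, softened by a temperature. All names and the toy numbers are illustrative, not drawn from any company's actual pipeline; note that distillation against a closed API typically works differently, fine-tuning on the teacher's generated text, since commercial APIs do not expose raw logits.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the student is pushed toward the teacher's entire output
    # distribution, not just its top-1 answer.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token logits over a 4-token vocabulary (hypothetical values).
teacher = [2.0, 1.0, 0.1, -1.0]
student = [1.5, 1.2, 0.3, -0.5]
loss = distillation_loss(teacher, student)
```

The loss is zero only when the student reproduces the teacher's distribution exactly, which is why large volumes of teacher outputs are so valuable as training signal.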
The Scope of the Alleged Operation
What sets this case apart is its scale. Creating over 24,000 fake accounts and systematically harvesting 16 million conversations suggests a coordinated, industrial-scale operation rather than casual experimentation. Anthropic described this as an "industrial-scale distillation attack" in a company blog post, noting they had identified and blocked the operation.
Context: China's AI Surge
DeepSeek made waves in late 2024 and early 2025 by claiming GPT-4-level performance at a fraction of the cost. These allegations raise questions about whether some of that performance was achieved through unauthorized data acquisition. Moonshot AI (Kimi) and MiniMax are likewise fast-growing Chinese AI startups whose models have closed much of the gap with Western counterparts.
Implications for the AI Industry
This case could set important precedents around AI output ownership, the legality of distillation from commercial services, and competitive practices in the global AI race. Anthropic has published evidence of the attack and is calling for industry-wide attention to model output protection. The outcome may shape future policies on AI data rights and cross-border IP enforcement.