Topic / Subject
Anthropic says DeepSeek, Moonshot, and MiniMax used a large network of fake accounts and millions of prompts to “distill” Claude outputs into rival AI models.
TL;DR
Anthropic is taking the AI “copying war” public — alleging a coordinated campaign to harvest Claude at scale. The numbers are huge, but it’s still an allegation, not an adjudicated finding.
Key Details
Per Reuters, Anthropic alleges DeepSeek, Moonshot, and MiniMax used improper methods to extract Claude's capabilities to improve their own models. Anthropic claims the activity involved roughly 24,000 fake accounts and more than 16 million interactions/prompts. Anthropic describes the method as "distillation": using a stronger model's outputs to train or improve another model. Anthropic says these campaigns are increasing in intensity and sophistication, including fast shifts in traffic to capture newly released capabilities. Per Reuters, the accused companies had not responded at the time of the report.
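For context, "distillation" as Anthropic describes it maps onto a standard ML technique: a student model is trained to match the softened output distribution of a stronger teacher. A minimal sketch of the core loss, with all logits and temperatures purely illustrative:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution at a given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened outputs and the student's.

    Minimizing this nudges the student's distribution toward the teacher's,
    transferring the relative probabilities the teacher assigns to all answers.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# A student whose outputs resemble the teacher's incurs a lower loss.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.2, 0.4]
far_student = [0.5, 4.0, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

In practice, harvesting an API-only model means the "teacher logits" are replaced by sampled text outputs at scale, which is why the alleged account network and prompt volume matter.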
Breakdown
This is what it looks like when AI competition stops being “benchmark battles” and becomes “terms-of-service war.” Anthropic is saying competitors didn’t just test Claude — they systematically farmed it.
The detail that jumps out is scale: thousands of accounts, millions of prompts. If accurate, that’s not casual usage. That’s a pipeline designed to pull the best answers out of Claude and feed them into other models.
Why it matters: distillation isn’t new, but doing it through fake accounts and restricted access (as alleged) turns it into a compliance, security, and IP-style fight. It also raises the question every AI lab is quietly dealing with: how do you prevent your model from becoming someone else’s training data?
Also worth noting: we don’t have the full “impact report” in public view. Even if the activity happened, the real unanswered question is what capabilities were captured and how much it moved the needle.
What We Know
Anthropic is publicly alleging large-scale misuse of Claude tied to DeepSeek, Moonshot, and MiniMax, per Reuters. Anthropic claims 24,000 fake accounts and 16M+ interactions were used for distillation-style extraction. The accused companies had not responded to Reuters at the time of the report.
What We Don’t Know
Whether the allegations are accurate as described (there’s no public adjudication). Exactly what capabilities were captured, and how directly they were incorporated into training. The broader scope: whether other labs (named or unnamed) used similar tactics at similar scale.
What Would Confirm It
Detailed technical evidence shared publicly (or through legal/regulatory channels) showing account networks, traffic patterns, and data extraction methods. Responses from the named companies addressing the specific claims. Any formal action (lawsuit, enforcement, or regulator involvement) that surfaces verified findings.
What It Would Mean (real-world impact)
For AI companies: expect tighter gating, stronger monitoring, more aggressive rate-limits, and more “know your customer” controls. For users: more friction (verification, region checks) as labs try to stop abuse without locking out legitimate access. For the market: this could accelerate the split between “open weights” ecosystems and heavily protected closed models.
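One building block behind "more aggressive rate-limits" is per-account request budgeting over a sliding time window. A minimal sketch; the class name, thresholds, and escalation behavior are all illustrative, and real abuse detection would layer this with account-linkage and traffic-pattern signals:

```python
from collections import deque
import time

class SlidingWindowLimiter:
    """Flag accounts whose request rate exceeds a per-window budget (illustrative)."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.events = {}  # account_id -> deque of request timestamps

    def allow(self, account_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.events.setdefault(account_id, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over budget: throttle, or escalate for human review
        q.append(now)
        return True

# Three requests per minute allowed; the fourth within the window is refused.
limiter = SlidingWindowLimiter(max_requests=3, window_seconds=60)
results = [limiter.allow("acct-1", now=t) for t in (0, 1, 2, 3)]
assert results == [True, True, True, False]
```

The hard part the allegations highlight isn't any single limiter: it's correlating thousands of accounts that each stay under their individual budget.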
What to Watch Next
Whether the named companies respond, deny, or explain their own model development pipeline. Any follow-up from Anthropic with technical receipts (beyond summary numbers). Whether other labs (OpenAI, Google, etc.) echo similar claims or announce new anti-distillation defenses.
Sources
Reuters — “Chinese companies distilled Claude to improve own models, Anthropic says”
The Wall Street Journal — “Anthropic Accuses Chinese Companies of Siphoning Data From Claude”
Comment
Should AI labs lock models down harder to stop “distillation,” or does that just punish normal users with more friction?