Anthropic accuses DeepSeek, MiniMax and Moonshot AI of Claude distillation
24,000 accounts and 16 million exchanges cited; API access turns model output into an extractable resource
Anthropic says it has detected what it calls “industrial‑scale” efforts by three major Chinese AI labs—DeepSeek, MiniMax and Moonshot AI—to use its Claude model illicitly as a training resource.
According to Business Insider, Anthropic alleges the firms created roughly 24,000 fraudulent Claude accounts that generated more than 16 million exchanges, in violation of terms of service and regional access restrictions. The company says the activity was aimed at “distillation”: training a smaller or cheaper model on the outputs of a more capable one.
Distillation itself is not controversial inside AI engineering; it is one of the standard ways to compress capabilities into deployable models. The dispute is about permission and enforceability. If a model is offered through an API, its output becomes an extractable commodity. A rival does not need your weights, your data pipeline, or your compute budget—only enough access to query the system at scale and capture the responses. Anthropic claims MiniMax shifted tactics rapidly when a new Claude model was released, redirecting nearly half its traffic within 24 hours to “capture capabilities from our latest system,” suggesting an operation organised around release cycles rather than ad‑hoc experimentation.
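For readers unfamiliar with the mechanics, the extraction step is mundane: a script sends prompts to the provider's API and records the responses as training pairs for a smaller "student" model. The sketch below is illustrative only; the endpoint, model names, response format and file paths are hypothetical stand-ins, not Anthropic's or any lab's actual interface.

```python
# Illustrative sketch: how API outputs can become a distillation dataset.
# The endpoint, API key, model name and response schema are hypothetical.
import json
import requests

TEACHER_ENDPOINT = "https://api.example.com/v1/chat"  # hypothetical teacher API
API_KEY = "sk-placeholder"                             # placeholder credential

prompts = [
    "Explain the difference between a process and a thread.",
    "Write a Python function that reverses a linked list.",
]

records = []
for prompt in prompts:
    # Query the teacher model and capture its response verbatim.
    resp = requests.post(
        TEACHER_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "teacher-large",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    # Assumed response layout; real providers differ.
    answer = resp.json()["choices"][0]["message"]["content"]
    # Store the pair in a supervised fine-tuning format a student model can consume.
    records.append({"prompt": prompt, "completion": answer})

# The accumulated (prompt, completion) pairs become fine-tuning data for a
# smaller model -- no access to the teacher's weights or training data needed.
with open("distillation_set.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```

Scaled from a handful of prompts to millions of automated queries across thousands of accounts, the same loop is what Anthropic describes as industrial-scale extraction.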
That dynamic pushes providers toward measures that look less like product design and more like gatekeeping. Business Insider reports Anthropic has built “behavioral fingerprinting systems,” shares indicators with other AI firms, and is developing further countermeasures. Each of those steps raises the cost of anonymous access and increases the value of identity‑linked accounts and monitoring. The more valuable the output, the more rational it becomes to treat users as potential extractors.
Anthropic is also using the allegation to argue for policy that constrains competitors’ ability to scale. The company said the campaigns underscore the case for US export controls on advanced AI chips, Business Insider reports—an argument that has split the industry, with Nvidia chief Jensen Huang among those warning that restrictions will not stop China’s progress.
The immediate stakes are commercial. The longer‑term stakes are institutional: “closed” models offered as services behave like leaky reservoirs. If extraction is cheap and hard to prove, vendors either accept leakage as a cost of doing business—or they tighten access until the product resembles a licensed utility.
Anthropic says it identified 24,000 accounts and 16 million exchanges. That is a lot of output for something that is supposed to be a chat interface, not a training feed.