Anthropic, an artificial intelligence startup, has accused three major Chinese AI companies—DeepSeek, MiniMax, and Moonshot AI—of unauthorized use of its AI system Claude to train their own models. The company alleges that these firms conducted large-scale campaigns involving approximately 24,000 fraudulent Claude accounts which collectively generated over 16 million exchanges, violating Anthropic’s terms of service and geographical access restrictions.
The process at issue is known as “distillation,” a technique in which a less powerful AI model is trained on the outputs of a more advanced one. While distillation is a common and accepted method within the AI industry, Anthropic claims its Chinese competitors used it illicitly to gain an unfair advantage in the competitive global AI landscape.
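To make the technique concrete, here is a minimal, hypothetical sketch of distillation: a small “student” model is trained to match the soft output probabilities of a “teacher,” here a fixed linear scorer standing in for a large model queried through its API. All names and numbers are illustrative assumptions, not a description of any company's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear scorer standing in for a large model.
W_teacher = np.array([[2.0, -1.0], [-1.5, 2.5]])

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# 1. Query the teacher on many inputs to collect soft labels --
#    the "outputs of a more advanced model" a distiller trains on.
X = rng.normal(size=(500, 2))
teacher_probs = softmax(X @ W_teacher)

# 2. Train a smaller student to match those soft labels by
#    gradient descent on the cross-entropy loss.
W_student = np.zeros((2, 2))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student)
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# The student now approximates the teacher's behaviour
# without ever seeing the teacher's weights or training data.
agreement = (student_probs.argmax(1) == teacher_probs.argmax(1)).mean()
print(f"student/teacher agreement: {agreement:.2%}")
```

The key point for the dispute above is step 1: distillation needs only query access to the teacher, which is why large volumes of API traffic (such as the millions of exchanges Anthropic alleges) can substitute for direct access to a model's weights.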
Anthropic’s allegations highlight that the unauthorized campaigns are increasing in both scale and complexity. The company emphasized that this situation extends beyond a single entity or region, calling for coordinated efforts among industry players, policymakers, and the broader AI community to address these challenges promptly.
According to Anthropic’s statement on Monday, Claude is not commercially accessible within China, but the rival companies reportedly found workarounds to exploit the platform. Notably, DeepSeek allegedly attempted to develop “censorship-safe” rephrasings of policy-sensitive queries. MiniMax was reportedly detected during the active phase of its campaign, allowing Anthropic to monitor its tactics closely. When Anthropic released a new version of Claude, MiniMax redirected nearly half its operations within 24 hours to extract capabilities from the updated system.
Anthropic warned that such unauthorized distillation could pose security risks, since distilled models may lack the safeguards of the original and could potentially assist in harmful endeavors such as bioweapon development. In response, Anthropic has implemented behavioral fingerprinting systems and shares intelligence with other AI companies to detect and prevent similar activities, while continuing to strengthen its protective measures.
Anthropic CEO Dario Amodei has voiced concerns about the necessity of robust export controls on advanced AI chips. The company supports restrictions as a means to limit both direct training capabilities and the scale of illicit distillation efforts. This stance contrasts with tech leaders like Nvidia CEO Jensen Huang, who has argued that export controls will not deter China’s AI advancements.
In addition to leveling these allegations, Anthropic has faced scrutiny over its own training practices. Reports have detailed efforts under a project labeled “Project Panama” involving large-scale scanning of copyrighted books. The company previously settled a $1.5 billion class-action lawsuit with authors and publishers related to these practices, without admitting wrongdoing as part of the settlement.
Representatives from DeepSeek, MiniMax, and Moonshot AI have not publicly responded to Anthropic’s claims.