Anthropic Accuses Chinese AI Firms of Massive Data Extraction from Claude

Anthropic alleges three Chinese AI firms extracted massive data from Claude via fraudulent accounts, escalating tensions over AI competition and intellectual property.

Feb 24, 2026
Technobezz


Anthropic accused three Chinese AI developers of orchestrating industrial-scale data extraction from its Claude chatbot, exposing a tension at the heart of the industry: AI companies want open access to others' data for training but closed systems to protect their own.

The company accused DeepSeek, Moonshot AI, and MiniMax of creating more than 24,000 fraudulent accounts that together generated over 16 million interactions with Claude. The exchanges targeted advanced capabilities, including agentic reasoning, coding, and tool integration, through a technique called model distillation.

Distillation involves training smaller "student" models on outputs from larger "teacher" systems, compressing knowledge without replicating the original research costs. The technique is legitimate within a single company as a way to deploy models more efficiently, but cross-company use creates a competitive shortcut that bypasses the compute expenditure and alignment work behind the teacher model.
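The article does not describe how these firms implemented distillation. As a rough illustration of the general idea, a student model is typically trained to match the teacher's full output distribution rather than just its top answer. The sketch below (all names and the temperature value are illustrative, not from the source) shows the temperature-softened KL-divergence objective commonly used for this:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher temperature softens the
    distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    The student is penalized for diverging from the teacher's whole
    output distribution, which is what lets it absorb the teacher's
    behavior without repeating the teacher's training run.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When the student's logits match the teacher's exactly, the loss is zero; any divergence produces a positive penalty, which a training loop would minimize over many teacher responses, such as the millions of Claude interactions described above.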

DeepSeek accounted for approximately 150,000 exchanges focused on foundational reasoning and policy workarounds.

Moonshot AI generated about 3.4 million interactions targeting agentic reasoning and computer vision capabilities.

MiniMax conducted around 13 million exchanges concentrating on coding and orchestration features.

The accusations arrive amid heightened geopolitical tensions over semiconductor exports to China and a narrowing performance gap between U.S. and Chinese AI systems; recent forecasting research predicts parity could emerge by 2041.

Critics immediately questioned Anthropic's stance given its own training practices. The company recently settled a $1.5 billion copyright lawsuit after admitting to downloading millions of pirated books from Library Genesis and Pirate Library Mirror repositories.

"You trained on the open internet and then call it 'distillation attacks' when others learn from you," wrote Tory Green of infrastructure firm IO.Net on social media platform X.

Anthropic faces multiple lawsuits alleging unauthorized content scraping, including cases brought by Reddit, authors Andrea Bartz and Charles Graeber, and Concord Music Group. Reddit's complaint described "the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness" alongside "the private face that ignores any rules that interfere with its attempts to further line its pockets."

OpenAI made similar accusations against Chinese firms earlier this month in a memorandum to the U.S. House Select Committee on China. That document warned about sophisticated multi-stage pipelines blending synthetic-data generation with large-scale cleaning operations.

Both companies framed distillation as a national security threat, arguing that safety safeguards built into U.S. models could be stripped during replication. They warned about authoritarian regimes deploying advanced AI for cyber operations, disinformation campaigns, and surveillance systems.

Dmitri Alperovitch of Silverado Policy Accelerator noted these allegations confirm suspicions about rapid Chinese AI progress relying partly on U.S. model extraction.

The dispute shifts the policy focus from chip export controls, which limit model training, to inference compute, which enables large-scale replication through API querying. This creates enforcement challenges, since distillation occurs through legitimate product interfaces rather than overt hacking attempts.

Anthropic pledged enhanced detection measures including tighter account verification and intelligence sharing with other labs while calling for coordinated industry responses involving cloud providers and policymakers.
