OpenAI, Anthropic, and Google Share Intelligence to Counter Chinese AI Copying

Leading US AI firms collaborate to counter Chinese intellectual property theft through shared intelligence on adversarial distillation techniques.

Apr 7, 2026
Technobezz

Rivals OpenAI, Anthropic, and Google have begun sharing intelligence through an industry forum to detect Chinese competitors extracting proprietary AI capabilities via a technique called adversarial distillation. The three tech giants, normally locked in competition for AI dominance, are coordinating through the Frontier Model Forum they founded with Microsoft in 2023.

Their collaboration targets what US officials estimate costs Silicon Valley billions of dollars annually in lost profits, according to individuals familiar with internal findings who spoke anonymously.

Adversarial distillation involves using an existing "teacher" AI system to train a newer "student" version that replicates capabilities at far lower cost than developing original systems. While some forms of this technique are accepted within the industry, US firms view its use by Chinese entities as intellectual property theft that threatens both economic interests and national security.
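The teacher/student relationship described above can be sketched in a few lines. In the toy example below, the "teacher" is a fixed logistic classifier standing in for a proprietary model that an outsider can only observe through its outputs; every model, number, and name here is invented purely for illustration, not a depiction of any lab's actual systems.

```python
import math
import random

random.seed(0)

# Hypothetical "teacher": a proprietary model an attacker can only
# observe through its outputs (here, a fixed 1-D logistic classifier).
TEACHER_W, TEACHER_B = 3.0, -1.0

def teacher_prob(x):
    return 1.0 / (1.0 + math.exp(-(TEACHER_W * x + TEACHER_B)))

# Step 1: query the teacher at scale and record its soft outputs,
# the kind of bulk output harvesting described in the article.
queries = [random.uniform(-2.0, 2.0) for _ in range(500)]
soft_labels = [teacher_prob(x) for x in queries]

# Step 2: fit a "student" to the teacher's outputs (not to any
# ground-truth data) by gradient descent on cross-entropy loss.
w, b, lr = 0.0, 0.0, 1.0
for _ in range(3000):
    grad_w = grad_b = 0.0
    for x, t in zip(queries, soft_labels):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        grad_w += (p - t) * x
        grad_b += (p - t)
    w -= lr * grad_w / len(queries)
    b -= lr * grad_b / len(queries)

# The student now approximates the teacher's behavior without ever
# seeing its weights or training data.
print(f"student w={w:.2f}, b={b:.2f} (teacher w=3.00, b=-1.00)")
```

The point of the sketch is the economics the article describes: the student recovers the teacher's behavior from outputs alone, at a fraction of the cost of the original development.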

OpenAI confirmed its participation in the information-sharing effort and pointed to a recent congressional memo where it accused Chinese firm DeepSeek of attempting to "free-ride on the capabilities developed by OpenAI and other US frontier labs." Google, Anthropic, and the Frontier Model Forum declined comment on the collaboration.

The urgency intensified after DeepSeek's surprise release of its R1 reasoning model in January 2025, which immediately raised suspicions about potential data extraction from US systems. Microsoft and OpenAI investigated whether the Chinese startup had improperly exfiltrated large amounts of data from their models to create R1.

In February this year, OpenAI warned US lawmakers that DeepSeek continued using increasingly sophisticated tactics to extract results despite heightened prevention efforts. The company claimed in its House Select Committee on China memo that DeepSeek relied on distillation to develop new versions of its breakthrough chatbot.

Chinese AI developers typically release open-weight systems with publicly available underlying architecture, making them cheaper to use than proprietary US offerings. This creates economic pressure on American firms that have invested hundreds of billions in data centers and infrastructure while betting customers will pay for access rather than free alternatives.

The threat extends beyond any single company or region and carries national security risks, since distilled models often lack the safety guardrails designed to prevent malicious use.

Anthropic blocked Chinese-controlled companies from using its Claude chatbot last year and, earlier this year, identified three Chinese AI developers, DeepSeek, Moonshot, and MiniMax, as illicitly extracting capabilities via distillation.

Google published a blog post noting increased extraction attempts but hasn't provided evidence quantifying China's reliance on distillation techniques. The three US developers measure attack prevalence through volumes of large-scale data requests rather than direct proof of capability replication.
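As a toy illustration of that volume-based signal, the sketch below flags accounts whose aggregate requested output crosses a threshold. The log entries, account names, and threshold are all invented; real detection systems would rely on far richer signals than raw volume.

```python
from collections import Counter

# Hypothetical API request log: (account_id, tokens_requested) pairs.
request_log = [
    ("acct_a", 800), ("acct_b", 120_000), ("acct_a", 650),
    ("acct_b", 95_000), ("acct_c", 400), ("acct_b", 110_000),
]

# Toy heuristic: flag accounts whose total requested output volume
# exceeds a threshold, a crude stand-in for the volume-based
# monitoring the article attributes to the labs (threshold invented).
VOLUME_THRESHOLD = 200_000

totals = Counter()
for account, tokens in request_log:
    totals[account] += tokens

flagged = sorted(a for a, t in totals.items() if t > VOLUME_THRESHOLD)
print(flagged)
```

This also shows why such measurements are indirect, as the article notes: high request volume is consistent with extraction but is not proof that capabilities were actually replicated.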

Information sharing about adversarial distillation mirrors cybersecurity industry practices where firms regularly exchange data on attacks and adversary tactics to strengthen defenses. By working together through the Frontier Model Forum, these normally competitive tech giants aim to more effectively detect unauthorized extraction attempts, identify responsible parties, and prevent successful replication.

Trump administration officials signaled openness to fostering information sharing among AI companies through an information sharing and analysis center proposed in President Donald Trump's AI Action Plan, unveiled last year.

Current collaboration remains limited due to uncertainty about what the companies can legally share under existing antitrust guidance, even as they counter competitive threats from China. This challenge was highlighted by OpenAI's February warning that DeepSeek continued sophisticated extraction tactics despite prevention efforts.
