A copyright takedown request targeting thousands of leaked source code copies reveals stark hypocrisy at AI developer Anthropic, which built its models using millions of pirated books.
The company issued takedown notices for more than 8,000 copies of Claude Code's source code after accidentally including a source map file in version 2.1.88 of its npm package. That file pointed directly to where the proprietary code was stored online, allowing researchers to download and share it across GitHub repositories.
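For readers unfamiliar with source maps: a bundled JavaScript file typically ends with a sourceMappingURL comment pointing at a .map file, and that JSON file's standard sources and sourcesContent fields can embed the complete pre-bundle code. The sketch below shows how such a file gives up its contents; the file name cli.js.map is a placeholder, not the actual artifact from the package.

```typescript
// Sketch: recovering original source from a Source Map v3 file.
// "cli.js.map" is a placeholder name, not the actual leaked artifact.
import { readFileSync } from "node:fs";

// A minified bundle usually ends with a pointer to its map, e.g.:
//   //# sourceMappingURL=cli.js.map

interface SourceMapV3 {
  version: number;           // always 3 for this format
  sources: string[];         // original file paths
  sourcesContent?: string[]; // full original source, when embedded
  mappings: string;          // VLQ-encoded position data
}

const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));

// If sourcesContent is present, every pre-bundle file can be
// reconstructed verbatim, path by path.
map.sources.forEach((path, i) => {
  const content = map.sourcesContent?.[i];
  if (content !== undefined) {
    console.log(`recovered ${path}: ${content.length} characters`);
  }
});
```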
Anthropic later narrowed its request to 96 copies, saying the initial notice covered more accounts than intended, according to Wall Street Journal reporting cited by multiple outlets. The leak exposed the engineering techniques that let Claude Code operate as an autonomous agent, including the digital scaffolding coders call a harness and other implementation details that set the product apart from competitors.
This sudden embrace of copyright enforcement comes after Anthropic agreed to pay $1.5 billion to settle a lawsuit over its use of pirated books to train its models. The company had downloaded millions of volumes from shadow libraries such as LibGen and Pirate Library Mirror during its early development phase.
Anthropic cofounder Ben Mann celebrated one pirate site's launch in internal messages, writing "just in time!!!" alongside a link he shared with employees, according to court documents. The company also ran Project Panama, a secret initiative that scanned millions of physical books and then destroyed them with industrial cutting machinery.
"We don't want it to be known that we are working on this,"
Internal planning documents from 2024 stated via Washington Post reporting.
Security researchers examining the leaked code discovered a vulnerability that allows bypassing security checks when Claude Code processes more than 50 subcommands in a single instruction. After the 50th subcommand, the compute-intensive security analysis is skipped entirely and users are simply asked whether they want to proceed.
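In code terms, the reported failure mode looks something like the sketch below. All names and the surrounding structure are hypothetical, not Anthropic's actual implementation; the point is the pattern, in which per-subcommand analysis silently degrades to a confirmation prompt past a fixed threshold.

```typescript
// Hypothetical sketch of the reported bypass pattern.
const MAX_ANALYZED_SUBCOMMANDS = 50; // threshold reported by researchers

type Verdict = "allow" | "deny" | "ask-user";

function deepSecurityAnalysis(subcommand: string): Verdict {
  // Stand-in for the compute-intensive check (parsing, policy matching).
  return subcommand.startsWith("rm ") ? "deny" : "allow";
}

function checkCommand(subcommands: string[]): Verdict {
  if (subcommands.length > MAX_ANALYZED_SUBCOMMANDS) {
    // The bypass: analysis is skipped entirely and the user is simply
    // asked to confirm, so a malicious subcommand can ride along
    // inside a sufficiently long chain.
    return "ask-user";
  }
  for (const sub of subcommands) {
    const verdict = deepSecurityAnalysis(sub);
    if (verdict !== "allow") return verdict;
  }
  return "allow";
}
```

Under this pattern, a malicious 51st subcommand buried in a long chain never reaches the analysis routine at all.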
AI security firm Adversa found documentation for this vulnerability within the leaked source code itself. Anthropic has already developed a fix based on the open-source tree-sitter parser but hasn't enabled it in the public builds customers currently use.
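Reports don't detail how the fix works, but tree-sitter is a widely used open-source parsing library, and one plausible approach is to parse the shell input into a syntax tree and enumerate every subcommand rather than capping the analysis. A sketch under that assumption, using the tree-sitter and tree-sitter-bash npm packages:

```typescript
// Sketch of a tree-sitter based subcommand enumerator. Whether
// Anthropic's fix works this way is an assumption.
import Parser from "tree-sitter";
import Bash from "tree-sitter-bash";

const parser = new Parser();
parser.setLanguage(Bash);

function listSubcommands(shellCommand: string): string[] {
  const tree = parser.parse(shellCommand);
  const commands: string[] = [];
  const visit = (node: Parser.SyntaxNode): void => {
    // The bash grammar labels each simple command node "command".
    if (node.type === "command") commands.push(node.text);
    node.children.forEach(visit);
  };
  visit(tree.rootNode);
  return commands;
}

// Every subcommand in a chained instruction is surfaced for analysis,
// with no fixed cap on how many get inspected.
console.log(listSubcommands("ls && cat a.txt; rm -rf /tmp/x | wc -l"));
```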
The incident raises questions about whether AI coding tools contributed to the leak, given similar high-profile errors at Amazon and Meta and Anthropic's frequent boasts that it builds its models with its own AI coding systems. The company officially attributes the mistake solely to "human error."
No customer data or model weights, the internal numerical parameters that encode what Claude has learned, were exposed in the leak.