The Pentagon threatened to invoke the Defense Production Act and designate Anthropic as a supply chain risk last week after the AI company resisted removing safety restrictions that prevent military use of its Claude model for mass surveillance and autonomous weapons.
Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to sign documents granting unrestricted access to Claude for all lawful military applications, according to multiple reports. Officials warned they could cancel a $200 million contract awarded last July and designate Anthropic as a supply chain risk if the company refused.
Anthropic has maintained that while it will support national security missions within its models' reliable capabilities, it will not allow Claude to be used for mass surveillance of Americans or development of autonomous weapons systems.
The company's safety-first positioning has created friction with defense officials who view existing guardrails as obstacles to deployment.
Claude currently operates as the only AI system deployed within classified US defense networks under that $200 million pilot agreement. The Department of Defense has already integrated Claude into operations including drone technologies and automated targeting systems, according to reports.
The standoff escalated during what sources described as a tense Tuesday meeting at the Pentagon. Hegseth reportedly told Amodei that failure to comply could trigger action under the Defense Production Act, which allows the government to compel private companies to prioritize national security needs.
Other major AI firms, including OpenAI and Elon Musk's xAI, have already agreed to allow their models to be used for all lawful purposes under similar Pentagon agreements. Google is reportedly close to finalizing terms for its Gemini model, leaving Anthropic as the lone holdout, according to multiple reports.
The dispute comes one month after US forces reportedly used Claude via partner Palantir during operations that led to the capture of Venezuelan leader Nicolás Maduro in early January. That deployment demonstrated Claude's existing integration into military systems while highlighting ongoing tensions over usage boundaries.
Anthropic's resistance centers on core ethical commitments that distinguish it from competitors. The standoff has become a high-stakes test of whether AI companies can enforce limits on military applications of their own technology.
The company regularly publishes safety reports acknowledging potential misuse scenarios including sophisticated cyber-attacks enabled by its technology.
Pentagon officials argue that guardrails should be tuned to permit lawful military use cases. Emil Michael, Undersecretary of Defense for Research and Engineering and the Pentagon's chief technology officer, reportedly urged Anthropic to accept the Department of Defense's terms during recent discussions.
Defense leaders view unfettered access as essential for maintaining technological superiority in what they describe as a global AI arms race. As of Wednesday, neither party had announced a resolution ahead of the Friday deadline, with Anthropic indicating it had "no intention" of easing its restrictions, according to people familiar with the matter.















