Defense contractors are abandoning Anthropic's Claude AI system after the Pentagon designated the company a national security risk, forcing immediate transitions to alternative providers. The Trump administration blacklisted Anthropic on Friday following months of failed negotiations over ethical restrictions in a $200 million military contract.
Defense Secretary Pete Hegseth declared that any contractor doing business with the U.S. military must cease commercial activity with Anthropic, and gave federal agencies six months to phase out use of Claude technology.
Ten defense-focused portfolio companies backed by venture firm J2 Ventures have already stopped using Claude for military applications and are actively replacing it with other AI models, according to managing partner Alexander Harstrick. Major contractors including Lockheed Martin are expected to remove Anthropic's technology from their supply chains.
The conflict centers on two specific restrictions Anthropic refused to remove: prohibitions against using Claude for mass domestic surveillance of American citizens and fully autonomous weapons systems. Company CEO Dario Amodei stated these use cases "have never been included in our contracts with the Department of War, and we believe they should not be included now."
Hours after the blacklist announcement, OpenAI CEO Sam Altman revealed his company had struck a deal with the Pentagon to provide AI technology for classified military networks. Altman claimed OpenAI's agreement includes safeguards preventing use for domestic surveillance or autonomous weapons without human approval.
Anthropic generates approximately 80% of its revenue from enterprise customers, making the government designation particularly damaging. The company entered Defense Department networks in late 2024 through a partnership with Palantir before securing the $200 million contract that made Claude the first major AI model deployed in classified government systems.
Multiple defense technology executives told employees last week to begin switching from Claude to other models, including open-source alternatives. One executive said their company directed staff on Monday to stop using Claude entirely pending further guidance, on the assumption that an official ban would take effect.
Palantir, which relies on government contracts for nearly 60% of its U.S. revenue, faces potential short-term operational disruptions according to Piper Sandler analysts.
"While re-establishing AI functions with a new vendor can and will happen if needed, Anthropic was a trailblazer in terms of operationalizing AI models for data-sensitive environments," analysts wrote in a client note.
The Treasury Department, State Department, and Health and Human Services have also directed employees to move off Claude following President Trump's order. Trump called Anthropic executives "Leftwing nut jobs" in a Truth Social post accusing them of trying to "STRONG-ARM the Department of War" into obeying corporate terms instead of constitutional authority.
Anthropic plans to challenge the supply chain risk designation in court, arguing that Hegseth lacks statutory authority to bar companies that work with Anthropic from government business. The company maintains that under federal law, any such designation would apply only to defense contract usage, not to broader commercial relationships.
Nearly 500 OpenAI and Google employees signed an open letter supporting Anthropic's position, warning that Pentagon negotiations represent an attempt to divide AI companies through fear tactics. "They're trying to divide each company with fear that the other will give in," the letter states.
Anthropic continues to support U.S. military operations in Iran despite the blacklist announcement, while preparing a potential legal challenge to what it calls an "extremely flimsy" justification for the supply chain risk designation.