Anthropic CEO Dario Amodei has returned to negotiations with the Pentagon just days after talks collapsed over restrictions on how the military can use Claude AI models. The renewed discussions with Defense Department officials represent a last-ditch effort to salvage a $200 million contract that made Claude the first major AI system deployed on classified government networks.
The original deal fell apart late last week when Anthropic insisted on guarantees that its technology would not be used for domestic surveillance or autonomous weapons systems. According to a leaked memo, the Pentagon offered to accept all of Anthropic's terms except one: removal of language prohibiting "analysis of bulk acquired data." Amodei told staff this phrase "exactly matched this scenario we were most worried about."
Defense Secretary Pete Hegseth had demanded unfettered access to AI technology for "any lawful use," while President Donald Trump directed federal agencies to stop using Anthropic tools following the breakdown. Hegseth also threatened to designate the company as a national security supply-chain risk, a category typically reserved for foreign entities.
Amodei is now negotiating with Emil Michael, under secretary of defense for research and engineering, who publicly attacked the CEO last week as a "liar" with a "God complex."
The personal animosity adds another layer of complexity to discussions that could determine whether Anthropic maintains access to defense contracts or faces industry-wide exclusion. The pressure intensified when OpenAI announced its own Pentagon agreement hours after the White House criticized Anthropic. OpenAI CEO Sam Altman later said his company "shouldn't have rushed" its deal and urged officials not to label Anthropic a security risk.
Founded in 2021 by former OpenAI researchers who left over disagreements about safety priorities, Anthropic markets itself as a "safety-first" alternative in an industry where competitors have agreed to less restrictive military terms.