OpenAI secured a classified Pentagon contract within hours of rival Anthropic being banned from federal work over ethical objections to military AI use. The rushed agreement allows OpenAI models to operate on Department of Defense classified networks for logistics, intelligence analysis, cybersecurity, and operational planning.
President Donald Trump ordered all federal agencies to stop using Anthropic technology on February 28 after negotiations collapsed over two specific restrictions. Anthropic had refused to permit its Claude AI system for mass surveillance of American citizens or deployment in fully autonomous weapons without human oversight.
Defense Secretary Pete Hegseth designated Anthropic as a supply chain risk immediately following the breakdown.
“America’s warfighters will never be held hostage by the ideological whims of Big Tech,” he posted on social media. The Pentagon cancelled its existing $200 million contract with Anthropic that had been awarded in 2025 alongside OpenAI, xAI, and Google.
OpenAI CEO Sam Altman acknowledged the timing problems publicly.
“The deal was definitely rushed,” he said, adding that “the optics don’t look good.” The company announced its Pentagon agreement late Friday night, just hours after Trump’s directive against Anthropic. The new contract permits OpenAI technology for “all lawful purposes” within national security operations while retaining limits on certain applications. Technical safeguards will ensure models behave as intended during military use, according to company statements.
Nearly 500 employees from OpenAI and Google signed an open letter opposing the arrangement.
“We will not be divided,” they wrote, referencing Pentagon tactics described in internal communications: “They’re trying to divide each company with fear that the other will give in.”
Anthropic’s ethical stand came during negotiations for what would have been a $200 million extension of its existing Pentagon work. Company executives maintained that current frontier AI models lack sufficient reliability for fully autonomous weapons deployment.
Legal experts question whether Hegseth possesses statutory authority to designate a domestic company as a supply chain risk for refusing certain contractual terms. Anthropic has indicated it will challenge the designation in court if pursued further.
The controversy unfolded alongside U.S. military strikes against Iran that reportedly used Claude for intelligence assessments and target identification despite the newly announced ban. The continued use points to complications in phasing out the technology during the six-month transition period the Pentagon has granted.
OpenAI recently closed a $110 billion funding round valuing the company at $840 billion, while Anthropic completed a $30 billion venture round at a $380 billion valuation earlier this month.