U.S. Labels Anthropic an Unacceptable National Security Risk

The U.S. government declares Anthropic a national security risk over its refusal to allow unrestricted military use of its AI technology.

Mar 18, 2026
3 min read
Technobezz


The Department of War (formerly the Department of Defense) has declared artificial intelligence company Anthropic an "unacceptable risk to national security," arguing in court filings that the company's ability to disable its own technology during wartime makes it too dangerous for military use.

In a 40-page legal filing submitted Tuesday, Department of War lawyers warned that Anthropic could "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations." The government specifically cited concerns that the San Francisco-based AI lab might act if it felt its corporate "red lines were being crossed."

The conflict stems from a $200 million contract Anthropic signed with the Pentagon last summer to deploy its Claude AI within classified systems. Negotiations broke down when Anthropic refused terms allowing "any lawful use" of its technology by the military, objecting specifically to applications involving mass surveillance of Americans and targeting decisions for lethal weapons.

Defense Secretary Pete Hegseth labeled Anthropic a supply chain risk last month, effectively barring it from federal contracts. President Donald Trump subsequently ordered federal agencies to stop using Anthropic's technology entirely.

"This is not about speech, it's about whether a private company can dictate how our military uses technology,"

one government official stated in the filing. The Pentagon argues that AI systems remain "acutely vulnerable to manipulation" by their developers, creating unacceptable vulnerabilities in wartime scenarios where consistent performance is critical.

Anthropic CEO Dario Amodei countered these claims in a February statement, saying his company has "never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." He emphasized that military decisions belong to the Department of Defense, not private contractors.

The legal battle represents more than a contract dispute: it tests whether corporate ethics policies can override military operational requirements during national emergencies. Constitutional rights lawyer Chris Mattei told TechCrunch there has been no investigation supporting the DOD's specific concerns about potential system disabling.

"The government is relying completely on conjectural, speculative imaginings,"

Mattei argued.

Microsoft filed a friend-of-the-court brief supporting Anthropic's position, while 37 engineers and researchers from OpenAI and Google, including Google chief scientist Jeff Dean, filed a separate amicus brief in their personal capacities. The amici argue the Pentagon could have simply terminated its contract rather than labeling Anthropic a national security threat.

Anthropic faces potential revenue losses reaching billions of dollars from the supply chain designation, even though non-defense clients can continue using its services. The company maintains that seeking judicial review doesn't change its commitment to national security but represents necessary protection for its business and partners.

A federal judge will hear arguments on March 24 regarding Anthropic's request for a preliminary injunction against the ban while litigation proceeds.
