NSA Uses Anthropic AI Model Despite Pentagon Blacklist

NSA uses Anthropic's restricted AI model for cybersecurity despite Pentagon blacklist, highlighting internal government conflict over the powerful tool.

Apr 20, 2026
5 min read
Technobezz


A widening contradiction inside Washington has emerged over Anthropic's AI security tools, with the National Security Agency reportedly using the restricted Mythos model despite a Pentagon blacklist.

Two sources told Axios that NSA personnel are accessing Mythos Preview, an AI system described as "strikingly capable at computer security tasks." The model can find bugs in decades-old code and autonomously discover ways to exploit them, outperforming human hackers on some cybersecurity challenges.

This usage directly conflicts with a March decision by the Department of Defense, which designated Anthropic a supply chain risk and attempted to blacklist it from federal contracts. The Pentagon took action after the AI firm refused to loosen safeguards related to autonomous weapons and domestic surveillance capabilities.

White House officials appear to be exploring their own path forward. Last week, CEO Dario Amodei met with Treasury Secretary Scott Bessent and White House Chief of Staff Susie Wiles to discuss use of Mythos and broader safety practices.

A White House statement called the meeting "productive and constructive," noting discussions about "opportunities for collaboration" and "shared approaches" for scaling the technology safely.

The timing highlights how critical this technology has become for national defense work, even amid political tensions. Only about 40 organizations worldwide have received access to Mythos due to concerns about its offensive cyber capabilities, but those with clearance have used it to scan their own systems for exploitable vulnerabilities.

Anthropic's models have been used in high-level military work since 2024, according to court records. The company is currently suing federal agencies over the supply chain risk designation, arguing it amounts to retaliation for refusing to grant the Pentagon unfettered access out of concern the technology could be turned to mass domestic surveillance.

While a federal appeals court denied its request to temporarily block the designation, many agencies continue using its tools under previous arrangements. The situation reveals how military AI policies are increasingly set through procurement contracts rather than laws or regulations, creating conflicting standards across different branches.

President Donald Trump previously directed all federal agencies to stop using Anthropic, calling the company's leadership "left wing nut jobs" attempting to "strong arm" defense officials; a federal judge temporarily blocked enforcement of that directive in March. When asked about Amodei's White House visit last week, Trump said he had "no idea" about the meeting.
