Anthropic CEO Dario Amodei has publicly refused demands from the US Department of War to remove safety restrictions that prevent the company's Claude AI system from powering fully autonomous weapons or conducting mass domestic surveillance. The standoff comes despite Anthropic holding a two-year, $200 million defense contract awarded earlier this year through the Pentagon's Chief Digital and Artificial Intelligence Office.
In a statement published on Anthropic's website, Amodei declared the company "cannot in good conscience accede to their request" to eliminate safeguards against weaponization and surveillance uses. The Department of War had reportedly given Anthropic until 5:01 PM on February 27 to comply with demands for unrestricted access, threatening to terminate contracts with companies that refuse to permit "any lawful use" of their AI systems.
"We will not knowingly provide a product that puts America's warfighters and civilians at risk," Amodei wrote, acknowledging that while partially autonomous weapons used in conflicts like Ukraine are "vital to the defense of democracy," current frontier AI systems lack sufficient reliability for fully autonomous deployment without human oversight. The CEO emphasized that using AI for mass domestic surveillance conflicts with democratic values, even as his company continues supporting lawful foreign intelligence operations.
The confrontation emerges just months after Anthropic expanded its government offerings, becoming the first frontier AI company to deploy models within classified US government networks and national laboratories. The company had been working proactively with defense agencies, developing custom Claude Gov models specifically for national security customers before the ethical dispute escalated.
Should the Department of War decide to terminate its relationship with Anthropic, the company has committed to enabling a smooth transition to alternative providers and to keeping its models available under current restrictions "for as long as required."
The stance comes as the Pentagon investigates defense contractors' reliance on Anthropic's services, a sign of broader concern about AI dependency within military supply chains.
Anthropic's position reflects growing tension between commercial AI developers and government agencies seeking advanced capabilities for national security applications. The company maintains it will continue supporting defense needs through properly safeguarded implementations while drawing clear boundaries around autonomous lethal systems and warrantless surveillance.