In a dramatic standoff with the U.S. Department of Defense, AI firm Anthropic has declared it will not compromise its ethical principles regarding military applications of its technology. CEO Dario Amodei stated unequivocally that the company would rather sever ties with the Pentagon than permit uses of its AI systems that could “undermine, rather than defend, democratic values.”
The confrontation escalated during a recent meeting with Defense Secretary Pete Hegseth, who demanded Anthropic accept “any lawful use” of its tools. The discussion concluded with the Pentagon threatening to remove Anthropic from its supply chain if the company refused compliance.
At the heart of the dispute are two specific applications: mass domestic surveillance and fully autonomous weapons systems. Anthropic maintains that such uses have never been part of its contractual agreements and should not be permitted now. The company specifically objects to employing its Claude AI for these purposes, citing fundamental ethical concerns.
Despite receiving updated contract language from the Defense Department, Anthropic representatives said the changes made “virtually no progress” toward preventing objectionable uses. The company asserts that the proposed safeguards contain legal loopholes that would allow them to be “disregarded at will.”
The conflict has grown increasingly acrimonious, with Undersecretary of Defense Emil Michael personally attacking Amodei on social media, accusing him of seeking to “personally control the US Military” while endangering national security.
The Pentagon has threatened to invoke the Defense Production Act against Anthropic, which would grant the government authority to compel the company to meet defense requirements. Additionally, officials have suggested designating Anthropic as a “supply chain risk,” effectively barring it from government contracts.
According to sources familiar with the negotiations, tensions predate the public revelation that Claude AI was utilized in a U.S. operation to apprehend Venezuelan President Nicolás Maduro.
Amodei elaborated on the company’s concerns in a blog post, explaining that AI systems could potentially “assemble scattered, individually innocuous data into a comprehensive picture of any person’s life – automatically and at massive scale.” While supporting lawful foreign intelligence applications, Anthropic maintains that mass domestic surveillance contradicts democratic principles.
Regarding autonomous weapons, Amodei stated that current AI technology remains “simply not reliable enough” for such critical applications, emphasizing that “without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
The company has offered to collaborate with the Defense Department on research and development to enhance system reliability, but reports indicate this proposal has not been accepted. Both parties appear entrenched in their positions, setting the stage for a potentially protracted legal and ethical battle over the future of AI in defense applications.
