In an unprecedented show of solidarity, America’s technology behemoths, including Google, Amazon, Apple, and Microsoft, have thrown their collective weight behind artificial intelligence firm Anthropic in its high-stakes legal battle against the Trump administration. The controversy centers on Defense Secretary Pete Hegseth’s extraordinary designation of Anthropic as a “supply chain risk,” a move the tech giants warn could set a dangerous precedent for governmental overreach and retaliation against private enterprises.
The legal confrontation erupted after Anthropic refused to comply with administration demands to remove contractual provisions prohibiting the use of its AI technology in domestic mass surveillance programs and autonomous weapons systems. This principled stand triggered what Microsoft described in court filings as potentially “broad negative ramifications for the entire technology sector,” with the software giant emphasizing its agreement that AI tools “should not be used to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war.”
A coalition of influential organizations including the Chamber of Progress—a tech advocacy group representing Google, Apple, Amazon, Nvidia, and other major players—filed a joint amicus brief expressing grave concerns about the administration’s punitive actions. The brief characterized the Defense Department’s labeling of Anthropic as “a potentially ruinous sanction” that amounts to little more than a “temper tantrum” by government officials.
The legal documents reveal startling allegations that the Defense Department actively contacted Anthropic’s customers, urging them to sever business relationships with the AI company. During Tuesday’s court hearing in San Francisco, Department of Justice representatives declined either to deny these actions or to commit to ceasing further retaliation.
The conflict reached a boiling point in February, when Anthropic CEO Dario Amodei publicly refused to eliminate ethical guardrails from government contracts, prompting President Trump to announce on his Truth Social platform that Anthropic’s Claude AI—in use by government agencies since 2024—would be removed entirely from federal operations. Secretary Hegseth subsequently issued the unprecedented “supply chain risk” designation, marking the first time an American company has received such a label.
Notably absent from the coalition supporting Anthropic is Meta, which departed the Chamber of Progress in 2025 after years of membership. This divergence highlights the complex political landscape where tech executives have largely supported and donated to Trump since his return to office, yet found the administration’s actions against Anthropic sufficiently alarming to warrant unified opposition.
The case has attracted support from nearly 40 OpenAI and Google employees, along with two dozen former high-ranking military officials who warned the government’s actions “send the message that investing in national security carries the risk of capricious retaliation or disproportionate punishment for voicing disagreement.”
Legal experts anticipate that this landmark case may establish critical precedents regarding corporate free speech rights, ethical boundaries in government contracting, and the appropriate limits of executive power in regulating emerging technologies. As Foundation for Individual Rights and Expression counsel John Coleman put it: “A free society requires no less” than companies staying true to their principles against federal pressure.
