Judge blocks Pentagon supply chain risk label against Anthropic
AI procurement fight turns a national security label into a vendor blacklist tool; the injunction blocks the designation without forcing purchases
A federal judge in California has ordered the Trump administration to back away from a Pentagon directive that labeled Anthropic a “supply chain risk” and urged agencies to cut ties with the AI company. Judge Rita F. Lin issued the injunction in federal court in San Francisco on Thursday, saying the government’s actions were likely unlawful and lacked due process, according to NBC News. Anthropic, maker of the Claude chatbot and a major supplier of AI tools to government, had sued after Defense Secretary Pete Hegseth announced the designation in late February.
The label matters less for what it says about Anthropic than for how it works. “Supply chain risk” is normally a procurement control aimed at foreign hardware, telecom vendors, or components that can be tampered with. Applied to a U.S. software firm over contract terms, it becomes a fast-track method to punish a vendor without proving breach, fraud, or technical compromise. NBC News reports Lin found “no legitimate basis” to infer that Anthropic’s insistence on usage restrictions made it a potential saboteur, and noted the lack of “meaningful notice” before the government publicly moved to bar the company.
The underlying dispute is about who controls downstream use. Anthropic has pushed for limits on military and intelligence applications, including bans on autonomous weapons and mass domestic surveillance, while the Defense Department objected to those constraints. In a normal market, a customer that dislikes a supplier’s terms switches suppliers and pays the switching costs. In Washington’s market, the customer can also influence standards, security clearances, and eligibility rules—then present the outcome as a neutral “risk” determination.
That structure creates predictable lobbying incentives. Competitors benefit if the government can remove a rival by administrative label rather than by price or performance. Agencies benefit because security classifications and procurement bans are hard to challenge quickly, and they shift the argument away from the contract and toward patriotism. Anthropic benefits from litigating because the designation threatens not just future revenue but also reputational standing with other enterprise buyers who treat federal security signals as a proxy for trust.
Lin’s order does not force the Pentagon to keep buying Anthropic; it only blocks the government from using the “supply chain risk” mechanism as a blanket blacklist while the case proceeds. The judge paused her own order for a week to allow an appeal, according to NBC News.
Anthropic built its business on being the AI vendor willing to say "no" to certain uses. On Thursday night, a federal court stopped the Pentagon's attempt to treat that refusal as a security threat.