North America

Trump administration blacklists Anthropic from federal use

Pentagon cites supply chain risk after Claude contract dispute; six-month phase-out collides with immediate contractor ban


"We don't need it, we don't want it, and will not do business with them again," Trump said of Anthropic.
Pool photo by Kenny Holston/The New York Times.

The Trump administration is moving to bar Anthropic from US government work after a contract dispute with the Pentagon over how the military may use the company’s Claude model. President Donald Trump said on Friday he is directing federal agencies to stop using Anthropic’s technology, with a six-month phase-out for departments already using it, according to Business Insider. Hours later, Defense Secretary Pete Hegseth said the Department of Defense would designate Anthropic a “supply-chain risk” and would prohibit any contractor, supplier or partner doing business with the US military from conducting commercial activity with the company.

The immediate fight is over contract language. Anthropic CEO Dario Amodei wrote that the Defense Department sought terms allowing “any lawful use” of Claude, language he said would give the military broad discretion. Business Insider reports the company refused to drop safeguards related to mass surveillance of US citizens and autonomous weapons. Hegseth had set a deadline for Anthropic to accept the military’s terms and warned that the government could invoke the Defense Production Act, a wartime statute that expands presidential control over industrial resources.

The broader issue is how Washington is trying to turn frontier AI firms into controllable infrastructure. Labelling a domestic supplier a national-security risk is not just a procurement decision; it is a signal to the rest of the contractor ecosystem that model access, usage rights and compliance can be set by the state and enforced through deplatforming from federal demand. The Pentagon does not have to win a court case to change behaviour; it can shift incentives by making association with a targeted firm costly for everyone else.

That leverage matters because federal contracts are not simply revenue. They can shape product roadmaps, security architectures, and the default compliance posture for regulated industries that follow government standards. If “any lawful use” becomes the baseline, companies face a choice between building capabilities for state clients that would be unacceptable in consumer markets, or declining and watching competitors inherit both the contracts and the policy influence.

Anthropic’s position is also a test of how much independence any AI lab can sustain once it is treated as strategic capacity. A refusal may protect the company’s internal red lines, but a blacklist threatens downstream partnerships and enterprise sales if contractors fear secondary restrictions. Meanwhile, rival vendors positioned as more cooperative—whether OpenAI via Microsoft’s government channels or defence-adjacent firms that already live inside classified procurement—stand to absorb displaced workloads without having to fight the same contract terms first.

The administration’s order gives agencies six months to unwind their use of Anthropic. The Pentagon’s proposed ban would take effect immediately for anyone who wants to keep doing business with the US military.