
Microsoft backs Anthropic lawsuit against Pentagon blacklisting

A "supply chain risk" label threatens contractor AI stacks; the government grants itself transition time, but not its vendors


Microsoft CEO Satya Nadella's company said the Pentagon's actions against Anthropic "put at risk the very AI ecosystem that the Administration has helped to champion."
Sven Hoppe/picture alliance via Getty Images

Microsoft has asked a US federal court to temporarily block a Pentagon move that would effectively force defence contractors to stop using Anthropic’s AI models, escalating a dispute that has quickly become a proxy fight over who gets to be “safe” enough to supply the state.

According to Business Insider, Microsoft filed a proposed amicus brief backing Anthropic after the company sued Defence Secretary Pete Hegseth, the Pentagon and other federal agencies to halt a designation that labels Anthropic and its products a “supply chain risk”. The Pentagon has also told Anthropic it is taking the “unprecedented step” of requiring contractors to cease doing business with the firm, while President Donald Trump has ordered federal agencies to phase out Anthropic’s models within six months.

That timeline is at the heart of Microsoft's brief. The government gave itself half a year to unwind its own internal dependence, but the Pentagon determination offers no equivalent transition period for contractors who have built Anthropic into their products and deliver those systems to the military. Microsoft argues that immediate implementation would impose "substantial and wide-ranging costs and risks" on contractors, costs that are not confined to Anthropic but that propagate through the vendor stack as integrations, compliance documentation and procurement approvals are rewritten.

The fight also exposes a new chokepoint: “supply chain” has become a policy instrument for software services that are not physical components. If an AI model can be treated like a tainted part, the state can reshape the market by administrative label rather than by proving wrongdoing in court. For large platforms, the danger is not just losing a single supplier, but having customers—especially public-sector customers—treat a designation as a de facto ban across products that sit on top of that model.

Silicon Valley is lining up accordingly. Business Insider reports that OpenAI CEO Sam Altman urged the Pentagon not to proceed, and that dozens of OpenAI and Google employees backed a separate brief opposing the action. Microsoft, which pledged up to $5bn for Anthropic in November as part of an expanded partnership involving Nvidia, is unusually explicit about its exposure: Anthropic is described as a “foundational layer” in products Microsoft sells to government clients.

For now, the concrete issue is whether the court pauses the designation long enough for an orderly migration. The broader issue is whether access to “approved” machine intelligence becomes a licensing regime administered through procurement memos.

The Pentagon has given itself six months to move off Anthropic. Contractors are being told to stop first and ask questions later.