North America

Google expands Pentagon access to its AI

Anthropic's refusal triggers a supply-chain-risk label and a lawsuit; guardrails shrink when contracts are on the line


Google has expanded the Pentagon’s access to its artificial intelligence systems, including use on classified networks, after rival Anthropic refused to accept the Defence Department’s terms. TechCrunch reports the DoD sought “all lawful uses” without carve-outs; Anthropic asked for restrictions aimed at blocking domestic mass surveillance and autonomous weapons use. After the refusal, the Pentagon labelled Anthropic a “supply-chain risk,” a designation more commonly associated with foreign adversaries.

That label is now in court. A judge granted Anthropic an injunction last month while litigation proceeds, according to TechCrunch, temporarily blocking the designation and giving the company room to argue that a policy disagreement is being treated as a security threat. In the meantime, the market moved on: OpenAI and xAI both signed defence deals after Anthropic's stance became public, and Google is now the third major AI provider to step into the gap.

Google’s agreement includes language saying it does not intend its AI to be used for domestic mass surveillance or autonomous weapons. Similar wording appears in OpenAI’s contract, TechCrunch notes, but it is unclear whether such provisions are enforceable or merely aspirational. The Pentagon’s preference for broad permissions matters because “lawful” use is a moving target: it depends on classified interpretations, executive orders, and emergency authorities that can expand faster than public oversight.

Inside Google, the deal has revived an old internal conflict. Roughly 950 employees signed an open letter urging the company not to sell AI to the Defence Department without guardrails comparable to Anthropic’s. Google did not respond to TechCrunch’s request for comment. For the Pentagon, the incentives are straightforward: buying from the most permissive vendor reduces procurement friction and keeps the programme on schedule. For vendors, being the “safe” supplier to classified networks is a durable advantage—one that can outlast consumer product cycles and advertising downturns.

Anthropic’s experience shows the cost of saying no. A company that asked for limits was treated, at least administratively, as a risk to the supply chain; competitors that accepted broad terms were rewarded with contracts. The dispute is less about whether AI will be used in defence work—everyone in the sector is positioning for that—and more about who gets to define the boundaries, and what happens to the firms that insist on writing them down.

The Pentagon wanted unrestricted access. It got it from someone else.