Technology

Pentagon calls Anthropic a national security risk

DoD filing seeks to block the AI lab's lawsuit over contract terms, as defense procurement turns model access into a compliance test

In a new filing, the Department of Defense outlined its arguments against Anthropic's lawsuit. Dado Ruvic/REUTERS

A Pentagon filing has labeled Anthropic a “substantial risk” to national security and the defense supply chain, in a bid to knock down the AI company’s contract lawsuit. According to Business Insider, the Department of Defense argues that Anthropic’s refusal to accept government contractual terms is not protected speech, and that the dispute belongs in procurement law rather than the First Amendment.

The immediate case is narrow, a single vendor and a single contract, but the language signals how Washington intends to treat frontier AI providers. Once an AI lab is pulled into defense procurement, the relationship stops looking like ordinary enterprise software purchasing and starts looking like arm's-length supervision: compliance obligations, audit rights, subcontractor controls, incident reporting, and restrictions on where data and model weights can be stored or accessed. Export controls and “supply chain” language also drag in chip sourcing, cloud dependencies, and foreign staff vetting. None of this is unusual in defense contracting; what is new is applying it to companies whose core asset is a fast-moving model pipeline and whose competitive advantage is often speed.

That creates a sorting mechanism. Large labs with legal teams, security-cleared staff, and a compliance apparatus can absorb government terms and treat them as a moat. Smaller competitors and open-source-adjacent firms face a choice between building a bureaucratic wrapper and staying out of the biggest buyer in the market. The government, for its part, gains leverage without writing new AI statutes: it can demand visibility through contract clauses, then make “trust” a prerequisite for future awards.

There is also an information asymmetry problem. The Pentagon cannot easily validate model safety claims on its own schedule, but it can enforce process: paperwork, attestations, and access rights. That pushes “safety” away from measurable outcomes and toward whoever can demonstrate the right controls. When the state becomes the reference customer, the incentive shifts from building tools that work to building tools that are certifiable.

For now, the dispute is being litigated in filings rather than procurement offices. The Pentagon has nevertheless put its position in writing: in defense AI, the right to sell may depend less on model performance than on willingness to sign the government’s terms.