OpenAI reaches Pentagon deal for classified AI use

Altman promises technical safeguards as Anthropic is pushed out

Sam Altman said late Friday that OpenAI has struck a deal allowing the US Department of Defense to use its AI models inside the department’s classified network, with what he called “technical safeguards.” The announcement, reported by TechCrunch, comes days after a public breakdown in talks between the Pentagon and Anthropic over how broadly military customers should be allowed to use frontier models.

The immediate question is what "safeguards" mean in practice once a model is deployed into a classified environment. OpenAI's public pitch is that it will build a "safety stack" and embed engineers with the Pentagon to help operate the system, while codifying two principles: no domestic mass surveillance, and "human responsibility" for the use of force, including autonomous weapon systems. According to Fortune's Sharon Goldman, Altman told staff that if a model refuses a task, the government would not force OpenAI to change that behavior. That framing implies a practical control point: the vendor's policy layer becomes part of the weapon system's operating doctrine.

The deal lands in the wake of the Pentagon’s push for “all lawful purposes” access—language that, in Anthropic’s telling, collapses sensitive edge cases (surveillance, targeting, autonomy) into a single procurement checkbox. Anthropic tried to draw a bright line around mass domestic surveillance and fully autonomous weapons, and then found itself threatened with a “supply-chain risk” designation and an effective contractor ban, according to TechCrunch. OpenAI’s agreement offers the Pentagon a face-saving path: accept the same legal baseline but outsource the enforcement to the supplier’s technical controls.

That arrangement shifts power in two directions at once. The state gets capability on a classified network without having to build its own models or operational tooling from scratch. The vendor gets a privileged position in a market where compliance language can be converted into recurring contracts—while the hardest part of accountability is pushed into a black-box system operated under classification. When “safety” is implemented as software gates and refusal behavior, disputes become engineering tickets, not public policy debates.

Altman also said OpenAI is asking the Pentagon to offer the same terms to all AI companies, effectively proposing a default template for military AI procurement. If that template sticks, the next fight will not be whether AI is used by the defense establishment, but which company’s guardrails and update cadence become embedded in classified workflows.

Altman announced the agreement shortly before news broke that the US and Israel had begun bombing Iran. The Pentagon will be integrating “technical safeguards” into a live operational environment, not a lab exercise.