OpenAI rewrites Pentagon AI contract after surveillance backlash
Altman adds domestic spying limits and NSA carve-out language as protest schedules outpace procurement oversight
OpenAI says it will amend a newly announced Pentagon contract after a weekend of employee petitions and street protests raised fears that its models could be used for mass domestic surveillance.
In a post on X cited by Business Insider, chief executive Sam Altman said the company is adding language stating that, consistent with US law including the Fourth Amendment, its AI “shall not be intentionally used for domestic surveillance” of US persons. Altman also wrote that the Defense Department affirmed OpenAI services will not be used by “war intelligence agencies” such as the NSA without a follow-on modification to the contract.
The episode shows where the real leverage sits when AI suppliers sell into national-security procurement. The product is not only model access on classified networks, but the liability boundary around how that capability is used. Anthropic, OpenAI’s closest rival, had publicly drawn “red lines” around mass surveillance and fully autonomous weapons. According to Business Insider, the Pentagon deal landed amid a dispute between the Department and Anthropic, and a day after US strikes on Iran sharpened public attention to military AI.
Altman’s memo frames the revision as a communications failure—he said the company should not have “rushed” the deal and that it looked “opportunistic and sloppy.” But the specifics he highlighted are contractual: which agencies can use the system, for what purpose, and under what modification process. Those terms matter because the downside of misuse—political blowback, legal challenges, and reputational damage—does not stay neatly inside the government’s perimeter. It spills into the vendor’s workforce, its consumer brand, and its ability to keep selling to other governments and regulated industries.
The market reaction inside the AI sector is also visible in the labour channel. Business Insider reports that nearly 500 OpenAI and Google employees signed an open letter supporting Anthropic’s stance. That is a reminder that, in a tight talent market, internal dissent becomes a cost centre that procurement officers can ignore but executives cannot. The same dynamic applies to customers: “policy guardrails” are partly a compliance promise sold to downstream buyers who need plausible assurance that adopting a model will not drag them into future investigations.
OpenAI’s wording is narrow—“not intentionally” used for domestic surveillance—and hinges on “applicable laws,” which are interpreted by the same institutions seeking the tools. The company also points to a change-control mechanism: intelligence use would require a contract modification. That shifts the question from whether surveillance happens to who must sign the paperwork when it does.
On Tuesday, protesters are scheduled to demonstrate again, according to Business Insider. The contract language OpenAI is adding is, for now, the most concrete deliverable anyone has seen.