Technology

AI tools compress strike decisions in Iran war

Military kill chain shifts from deliberation to audit; accountability disperses as tempo rises

Academics say AI is collapsing the time required for military decision-making. Photograph: Majid Asgaripour/Reuters (theguardian.com)

After OpenAI finalized its Pentagon contract, employees began sharing their views online. Photograph: Florian Gaertner/Photothek via Getty Images (businessinsider.com)

In the first 12 hours of the US-Israeli strikes on Iran, almost 900 attacks were launched, according to The Guardian, as academics warned that new AI-assisted targeting systems are shrinking the time between identifying a target and firing a weapon. The same reporting says Anthropic’s Claude was used by the US military in the strike planning process, part of a broader effort to “shorten the kill chain” from intelligence to legal approval to launch.

The core change is not that machines have started wars on their own, but that they can now propose complete strike packages faster than people can interrogate them. The Guardian describes systems that fuse drone video, intercepted communications and human intelligence, then rank targets and recommend weapons while tracking stockpiles and past performance. Palantir-built tooling, the paper reports, also uses “automated reasoning” to evaluate the legal basis for a strike. That shifts the role of the human decision-maker from building a case to auditing a pre-built output — and auditing is the job most likely to be compressed when the operational tempo rises.
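The Guardian's description maps onto a familiar software pattern: a scoring pipeline that fuses signals, ranks candidates and hands a human a pre-built recommendation to audit. The sketch below is purely illustrative; it is not any real targeting system, and every class name, field and weight in it is invented for explanation.

```python
# Illustrative sketch only: a toy "recommend-then-audit" pipeline.
# Nothing here corresponds to any real military system; all names,
# fields and weights are invented for explanation.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    video_confidence: float    # fused drone-video signal, 0..1 (hypothetical)
    sigint_confidence: float   # intercepted-communications signal, 0..1 (hypothetical)
    humint_confidence: float   # human-intelligence signal, 0..1 (hypothetical)

@dataclass
class Recommendation:
    target: Candidate
    score: float
    weapon: str
    legal_rationale: str       # arrives pre-written; the human's job is auditing it

STOCKPILE = {"weapon_a": 12, "weapon_b": 3}  # hypothetical inventory tracking

def fuse(c: Candidate) -> float:
    # Toy fusion: a weighted average of the three intelligence streams.
    return 0.5 * c.video_confidence + 0.3 * c.sigint_confidence + 0.2 * c.humint_confidence

def recommend(candidates: list[Candidate]) -> list[Recommendation]:
    recs = []
    for c in sorted(candidates, key=fuse, reverse=True):
        # Pair with the most plentiful weapon (a stand-in for real pairing logic).
        weapon = max(STOCKPILE, key=STOCKPILE.get)
        recs.append(Recommendation(
            target=c,
            score=fuse(c),
            weapon=weapon,
            legal_rationale=f"automated check passed for {c.name} (score {fuse(c):.2f})",
        ))
    return recs

if __name__ == "__main__":
    for r in recommend([Candidate("site-1", 0.9, 0.4, 0.7),
                        Candidate("site-2", 0.3, 0.8, 0.5)]):
        print(r.target.name, f"{r.score:.2f}", r.weapon, "|", r.legal_rationale)
```

Even in a toy, the structural shift is visible: the reviewer never assembles the case; they receive the rationale already written and decide only whether to overturn it.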

Researchers quoted by The Guardian call this “decision compression”: the faster the system can generate plausible plans, the more pressure there is to approve them quickly, especially when adversaries can move assets or retaliate within minutes. David Leslie at Queen Mary University of London warns of “cognitive off-loading”, where the human feels less personal responsibility because the hard work of analysis has been done elsewhere.
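The arithmetic behind "decision compression" is worth making explicit. Every figure below is invented, but the relationship is general: when plans arrive faster than they can be audited, the reviewer either accumulates a backlog or cuts review depth to match the machine's pace.

```python
# Toy arithmetic for "decision compression"; every figure is invented.
PLANS_PER_HOUR = 30            # machine proposes a plausible plan every 2 minutes
REVIEW_MINUTES_PER_PLAN = 20   # a careful human audit of one plan

reviewed_per_hour = 60 / REVIEW_MINUTES_PER_PLAN     # 3 plans get full scrutiny
backlog_growth = PLANS_PER_HOUR - reviewed_per_hour  # 27 unreviewed plans pile up hourly
paced_review = 60 / PLANS_PER_HOUR                   # 2 minutes per plan to keep pace

print(f"backlog grows by {backlog_growth:.0f} plans/hour at full scrutiny")
print(f"keeping pace leaves {paced_review:.0f} min per plan, "
      f"a {REVIEW_MINUTES_PER_PLAN / paced_review:.0f}x cut in review depth")
```

Either outcome, a growing backlog or a tenfold cut in scrutiny per plan, is what the researchers mean by compression.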

The governance problem is that responsibility fragments exactly when consequences concentrate. If an AI-assisted recommendation leads to a catastrophic mistake, the chain of accountability runs through at least four actors: the military operator who clicked “approve”, the legal officer who signed off, the contractor that integrated the system, and the model provider whose weights shaped the output. Each can plausibly argue they were only one step in a process.
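The fragmentation can be made concrete as a data structure. The record below is entirely hypothetical, with invented field names; its point is that each of the four actors contributes exactly one link, and nothing in the record designates an owner of the outcome.

```python
# Hypothetical accountability record for one AI-assisted recommendation.
# Field names are invented; the point is structural, not procedural.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SignOff:
    actor: str       # which of the four actors acted
    artifact: str    # what that actor actually produced or approved
    timestamp: datetime

@dataclass(frozen=True)
class StrikeAuditRecord:
    recommendation_id: str
    signoffs: tuple[SignOff, ...]
    # Note what is missing: no field names an owner of the outcome.

now = datetime.now(timezone.utc)
record = StrikeAuditRecord(
    recommendation_id="rec-0001",
    signoffs=(
        SignOff("model_provider", "weights that shaped the output", now),
        SignOff("contractor", "integration that fused and ranked the intelligence", now),
        SignOff("legal_officer", "sign-off on the pre-built rationale", now),
        SignOff("operator", "clicked approve", now),
    ),
)
```

A post-incident review would have to reconstruct something like this record, and each signatory can truthfully say their artifact was only one field in it.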

That ambiguity is now colliding with the commercial politics of model access. Business Insider reports that OpenAI employees and former staff are publicly debating the company's Pentagon deal, with some calling for more disclosure and others insisting the contract contains strong restrictions against domestic mass surveillance and fully autonomous lethal weapons. Sam Altman has said OpenAI is working with the Pentagon to add language to the contract after the backlash. The internal dispute is not just about ethics; it is about who is left holding the bag when a classified system fails in public.

On Saturday, Iranian state media said 165 people, many of them children, were killed when a missile hit a school in southern Iran, apparently near a military barracks; the UN called it a grave violation of humanitarian law, and the US military said it was looking into the reports. The strike is now being investigated; the targeting cycle that produced it had been advertised as faster than ever.