Shield AI flies Anduril Fury drone with Hivemind autonomy

Air Force CCA program shifts competition from airframes to software integration; test milestones still leave edge cases unpriced

Shield AI's Hivemind flew Anduril's YFQ-44A aircraft for the first time in a test over the Mojave Desert. Shield AI / businessinsider.com

Shield AI says its “Hivemind” autonomy software has flown Anduril’s YFQ-44A Fury drone for the first time during a test over California’s Mojave Desert, a milestone in the US Air Force’s Collaborative Combat Aircraft (CCA) effort, according to Business Insider. The company said the AI pilot completed required test points including basic maneuvers and mid-mission updates.

The headline claim—an AI “pilot” flying a combat aircraft—turns on what autonomy means in practice. In this program, the Air Force is not trying to replace a cockpit with a large language model; it is trying to package perception, sensor fusion, navigation, and decision logic into software that can execute a mission when communications are degraded and when GPS may be unreliable. Shield AI markets Hivemind as distinct from conventional autopilot: it is supposed to re-plan routes in real time, respond to obstacles, and keep pursuing objectives rather than merely holding altitude and heading.
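The distinction can be made concrete with a toy sketch. The contrast below between a fixed-setpoint autopilot and a re-planning autonomy loop is purely illustrative; every function and data structure here is invented for this example and reflects nothing about Shield AI's actual Hivemind implementation.

```python
# Illustrative sketch only: a toy contrast between a fixed-setpoint
# autopilot and a re-planning autonomy loop. All names are hypothetical;
# nothing here reflects Shield AI's actual software.

def autopilot_step(state, setpoint):
    """Conventional autopilot: hold a fixed altitude/heading setpoint,
    regardless of what else is happening in the environment."""
    return {
        "climb": setpoint["altitude"] - state["altitude"],
        "turn": setpoint["heading"] - state["heading"],
    }

def plan_path(position, objective, obstacles):
    """Trivial 1-D greedy planner: step toward the objective and detour
    around any blocked cell. A real system would plan over fused sensor
    data with a far richer search."""
    step = 1 if objective > position else -1
    nxt = position + step
    if nxt in obstacles:  # obstacle detected: detour past it
        nxt += step
    return [nxt, objective]

def autonomy_step(state, objective, obstacles):
    """Toy autonomy loop: re-plan the path every tick, so the mission
    objective is still pursued when the environment changes."""
    path = plan_path(state["position"], objective, obstacles)
    return {"goto": path[0], "replanned": True}

# The autopilot keeps holding the same setpoint even if the world changes;
# the autonomy loop recomputes its route when an obstacle appears.
print(autonomy_step({"position": 0}, objective=5, obstacles={1}))
```

The point of the sketch is the loop structure, not the planner: the autopilot's output depends only on its setpoint, while the autonomy step re-derives its next action from the objective and the current environment on every tick.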

Whether this is a breakthrough or a well-produced demo depends on the constraints. “Met all required test points” can mean the aircraft stayed inside a carefully prepared flight envelope, with safety pilots and pre-approved behaviors limiting what the autonomy stack is allowed to do. Mid-mission updates are also ambiguous: they can represent robust autonomy under changing objectives, or they can represent supervised changes transmitted over a clean test range link. The hardest part of autonomy is rarely a single maneuver; it is the long tail of edge cases—sensor dropouts, conflicting inputs, unexpected aircraft states, and interaction with other aircraft—where software behaves correctly 99 times and catastrophically on the 100th.

The CCA program is designed around that trade: pushing capability into software while keeping humans in the loop at the mission level. The Air Force’s vision is uncrewed “wingmen” operating alongside crewed fighters, with some mix of autonomous behavior and human direction. Business Insider notes that earlier this month a stand-in CCA aircraft communicated and flew with an F-22, another step in proving system integration.

That integration is where costs and risks concentrate. Autonomy has to work with the aircraft’s flight controls, sensors, datalinks, and weapons interfaces, and it has to do so across multiple contractor ecosystems. The more the Air Force treats autonomy as a modular “app,” the more it must standardize interfaces; the more it allows bespoke integrations, the more it inherits fragile one-off systems that cannot be updated quickly.
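The "modular app" framing amounts to defining one interface boundary that every vendor's autonomy stack must implement, so the airframe side never depends on a particular vendor. A minimal sketch of that idea, with all names invented for illustration:

```python
# Hypothetical sketch of "autonomy as a modular app": a standardized
# boundary that any vendor's stack implements, so the airframe-side loop
# can swap stacks without a bespoke one-off integration. All names are
# invented; this reflects no real CCA interface specification.
from abc import ABC, abstractmethod

class AutonomyStack(ABC):
    """Standardized interface between airframe and autonomy software."""

    @abstractmethod
    def ingest_sensors(self, fused_picture: dict) -> None:
        """Accept the current fused sensor picture."""

    @abstractmethod
    def command(self) -> dict:
        """Return the next flight-control command."""

class VendorAStack(AutonomyStack):
    """One vendor's implementation; trivially turns away from a threat."""

    def __init__(self):
        self.picture = {}

    def ingest_sensors(self, fused_picture):
        self.picture = fused_picture

    def command(self):
        # Placeholder behavior: point 180 degrees off the threat bearing.
        return {"heading": self.picture.get("threat_bearing", 0) + 180}

def fly(stack: AutonomyStack, picture: dict) -> dict:
    """Airframe-side loop: knows only the interface, not the vendor."""
    stack.ingest_sensors(picture)
    return stack.command()

print(fly(VendorAStack(), {"threat_bearing": 90}))  # {'heading': 270}
```

The trade the article describes shows up directly in this shape: a stable `AutonomyStack` boundary lets any compliant stack drop in, but freezing that boundary too early constrains what vendors can expose, while skipping it leaves each airframe coupled to one vendor's internals.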

Shield AI points to past demonstrations, including the X-62A VISTA (a modified F-16) flying AI-enabled dogfight tests in 2024, though the Air Force has not publicly disclosed results. The program’s direction is clear even without a scorecard: the competition is shifting from airframes to training data, simulation environments, and the ability to certify software behavior under uncertainty.

Hivemind’s first Fury flight took place over the Mojave, in a test regime built to make failure survivable. The next question is how much of that autonomy remains when the range safety net is removed.