DeepMind CEO warns memory shortage is AI choke point

HBM supply and packaging capacity concentrate leverage in Samsung, SK Hynix, and Micron; model builders become procurement takers

Google's AI boss Demis Hassabis said the memory market came down to "a few suppliers of a few key components." PONTUS LUNDAHL/TT NEWS AGENCY/AFP via Getty Images

Google DeepMind CEO Demis Hassabis has put a blunt name on the AI industry’s least glamorous bottleneck: memory. “The whole supply chain is kind of strained,” he told CNBC, arguing that scarcity in key memory components is now an AI “choke point,” Business Insider reports.

Google does not lack chips; it designs its own Tensor Processing Units (TPUs) and can buy plenty of compute. The choke point is that scaling frontier models is increasingly limited by the ability to feed accelerators with high‑bandwidth memory (HBM) and by the packaging capacity needed to assemble dense, high‑performance systems. You can have all the silicon in the world; without enough fast memory close to the die, your expensive compute turns into a space heater.

Hassabis’ complaint is also a quiet map of power. Business Insider notes he pointed to “a few suppliers of a few key components.” On the memory side, that means Samsung, SK Hynix, and Micron—an oligopoly that suddenly looks less like a boring commodity business and more like the upstream cartel for the AI age.

HBM is the specific prize. Large language model training and inference want bandwidth and capacity, not just raw FLOPs, and HBM sits at the intersection of memory manufacturing, advanced packaging, and scarce production tooling. Constraints here ripple outward: hyperscalers hoard supply; consumer electronics firms get deprioritized; prices rise; and “AI strategy” becomes partly a procurement strategy.
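A back-of-envelope roofline calculation makes the bandwidth-versus-FLOPs point concrete. Every number below (model size, memory bandwidth, peak compute) is an illustrative assumption for a generic large model on a generic HBM-equipped accelerator, not a vendor spec or anything reported in the article:

```python
# Back-of-envelope roofline: is single-stream LLM decoding
# compute-bound or memory-bandwidth-bound?
# All constants are illustrative assumptions.

PARAMS = 70e9           # assumed model size, in parameters
BYTES_PER_PARAM = 2     # FP16/BF16 weights
HBM_BANDWIDTH = 3.0e12  # assumed HBM bandwidth, bytes/s
PEAK_FLOPS = 1.0e15     # assumed dense FP16 peak, FLOP/s

# At batch size 1, decoding one token streams every weight from
# memory once and does roughly 2 FLOPs per parameter of matmul work.
bytes_moved = PARAMS * BYTES_PER_PARAM
flops_needed = 2 * PARAMS

t_memory = bytes_moved / HBM_BANDWIDTH  # time to stream the weights
t_compute = flops_needed / PEAK_FLOPS   # time to do the arithmetic

print(f"memory-bound time per token:  {t_memory * 1e3:.2f} ms")
print(f"compute-bound time per token: {t_compute * 1e3:.2f} ms")
print(f"memory is the bottleneck by ~{t_memory / t_compute:.0f}x")
```

Under these assumptions the compute finishes its share of a token in a fraction of a millisecond and then waits tens of milliseconds for HBM to deliver the next round of weights, which is why bandwidth and capacity, not raw FLOPs, set the pace. Batching and caching narrow the gap, but the asymmetry is the reason HBM is the prize.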

Google’s response is to spend through it. Business Insider cites Alphabet projecting 2026 capex of $175–$185 billion. That’s not a science budget; it’s industrial policy by balance sheet—an attempt to buy priority in a supply chain that now dictates how quickly models can be trained, served, and iterated.

The market implication is uncomfortable for anyone selling “AI abundance.” If memory is the choke point, then the marginal power shifts away from model builders and toward memory producers and the foundry/packaging stack that can actually deliver HBM‑rich systems at scale. The next OPEC is unlikely to be a GPU vendor; it may be the companies that control the memory pipeline—and the capacity decisions that decide who gets to run the next training run and who gets to write another blog post about it instead.