Economy

OpenAI projects $665bn cumulative cash burn by 2030

Inference and training costs turn AI into a capital-and-electricity business; investors promised software margins are getting utility economics

OpenAI is quietly turning the “AI revolution” into a balance-sheet and power-grid problem.

According to The Decoder, citing internal financial documents reported by The Information, OpenAI has revised its projections again and now expects cumulative cash burn of about $665 billion through 2030—roughly $111 billion more than its prior estimate. The company does not expect to become cash-flow positive until 2030, projecting positive cash flow of about $39 billion that year. Meanwhile, rival Anthropic is reportedly aiming for break-even as early as 2028.

The numbers matter less as accounting trivia than as a stress test for the capital stack behind generative AI. OpenAI’s updated model implies annual cash burn accelerating from roughly $25 billion in 2026 to $57 billion in 2027, with training expenses projected near $440 billion by decade’s end. Inference—the ongoing cost of serving model outputs—has become the daily tax on “intelligence as a service”; OpenAI’s inference costs are described as having quadrupled, and the company’s adjusted gross margin is said to have fallen to 33% versus a targeted 46%.
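The figures above are easy to sanity-check with back-of-envelope arithmetic. A minimal sketch in Python, using only the reported numbers; the derived values (implied prior estimate, year-over-year burn growth) are my own calculations, not figures from the documents:

```python
# Reported figures (in $bn) from the projections cited above.
cumulative_burn_2030 = 665      # new cumulative cash-burn estimate through 2030
upward_revision = 111           # increase over the prior estimate
burn_2026, burn_2027 = 25, 57   # projected annual cash burn

# Implied prior cumulative estimate: 665 - 111 = 554 ($bn)
prior_estimate = cumulative_burn_2030 - upward_revision

# Implied year-over-year burn growth, 2026 -> 2027
burn_growth = burn_2027 / burn_2026   # ~2.3x in a single year

print(f"Prior estimate: ${prior_estimate}bn; "
      f"burn growth 2026->2027: {burn_growth:.2f}x")
```

The point of the exercise: a cost base that more than doubles in one year is the opposite of the improving unit economics a software-margin story requires.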

This is the part of the AI story that doesn’t fit in product demos: marginal intelligence has a marginal cost, and it is increasingly denominated in electricity, GPUs, and scarce data-center capacity. When inference costs explode, the business stops looking like software and starts looking like a utility—except without the regulatory compact, the stable rate base, or the political permission to raise prices when demand spikes.

OpenAI’s revenue forecasts are rising, but the cost curve is rising faster. The Decoder reports 2025 revenue of $13.1 billion and projections of $30 billion in 2026 and $62 billion in 2027, with consumer subscriptions expected to remain the largest driver and enterprise ambitions scaled dramatically. The company reportedly claims 910 million weekly active users and targets 2.75 billion by 2030—numbers that read like social-network dreams stapled onto an energy-intensive industrial buildout.
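The same back-of-envelope treatment applies to the growth targets. A hedged sketch of the implied multiples, assuming (my assumption, not stated in the reporting) that the 910 million weekly-active-user figure is the 2025 base, making 2030 a five-year window:

```python
# Reported revenue path (in $bn) and user figures from above.
revenue = {2025: 13.1, 2026: 30, 2027: 62}
users_now, users_2030 = 910e6, 2.75e9

# Implied year-over-year revenue multiples
mult_2026 = revenue[2026] / revenue[2025]   # ~2.3x
mult_2027 = revenue[2027] / revenue[2026]   # ~2.1x

# Implied compound annual user growth over an assumed five-year window
years = 5
user_cagr = (users_2030 / users_now) ** (1 / years) - 1   # roughly 25%/yr

print(f"Revenue growth: {mult_2026:.2f}x then {mult_2027:.2f}x; "
      f"user CAGR: {user_cagr:.1%}")
```

Sustaining roughly 25% annual user growth for five years at this scale is a social-network-era assumption; the revenue curve, meanwhile, must outrun a cost curve that the same documents show accelerating faster.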

So who actually carries the risk? It is shared—and blurred—across hyperscalers that provide compute, private credit and bond markets that fund data-center buildouts, and governments that increasingly rebrand subsidies and guarantees as “industrial policy” for AI competitiveness. If the economics tighten, the temptation is obvious: socialize the grid upgrades and permitting fast-tracks, privatize the upside, and call it national strategy.

The exit question is equally unromantic. If model capability continues to require ever-larger capex and opex, the natural endpoint is either (1) consolidation into a few quasi-utilities with privileged access to power and chips, (2) a pricing regime where consumers and firms pay more explicitly for inference, or (3) a political bailout narrative dressed up as “critical infrastructure.” None of these outcomes resembles the frictionless, decentralized future that AI marketing still sells.

OpenAI may be building astonishing tools. But the macro story is that the next frontier of AI is not just algorithms—it’s who gets stuck holding the bill when the cost of “thinking” scales like heavy industry.