Amazon moves Trainium chips toward external sales
AWS growth accelerates even as capex surges; free cash flow drops to $1.2 billion as the AI buildout runs ahead of monetisation
Amazon prepares to sell Trainium chips beyond AWS; Jassy ties the AI boom to record capex as free cash flow collapses; chip ambitions arrive before customers see stable unit economics
Amazon says it could begin selling its Trainium artificial-intelligence chips to external customers within two years, extending hardware that has so far been positioned mainly as an AWS advantage. The shift comes as Amazon Web Services reported net sales up 28% year-on-year to $37.6 billion, its fastest growth in 15 quarters, according to TechCrunch. In the same reporting period, Amazon’s trailing twelve-month free cash flow fell to $1.2 billion, down 95% from a year earlier, after a $59.3 billion jump in property and equipment purchases.
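As a back-of-the-envelope check on those figures (an illustrative sketch derived only from the numbers quoted above, not from Amazon's reported accounts): a 95% drop to $1.2 billion implies trailing free cash flow of roughly $24 billion a year earlier, a swing smaller than the $59.3 billion increase in equipment purchases alone.

```python
# Back-of-the-envelope check on the cash-flow figures quoted above.
# The $1.2bn FCF, the 95% decline and the $59.3bn capex jump come from
# the article; the prior-year FCF is a derived estimate, not a reported one.

ttm_fcf = 1.2            # trailing-twelve-month free cash flow, $bn
decline = 0.95           # reported year-on-year drop

# If $1.2bn is 95% below last year's level, last year's figure was:
implied_prior_fcf = ttm_fcf / (1 - decline)   # roughly $24bn

capex_jump = 59.3        # increase in property & equipment purchases, $bn
print(f"implied prior-year FCF: ${implied_prior_fcf:.1f}bn")
print(f"capex increase alone:   ${capex_jump}bn")
```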
The company is describing that cash drain as the price of staying in the AI supply chain rather than merely renting it. On the earnings call, CEO Andy Jassy listed the inputs AWS must buy before it can bill customers—land, power, buildings, chips, servers, and networking gear—framing the current spending surge as “short-term cash burn for long-term payoff,” TechCrunch reports. Amazon has said it plans roughly $200 billion in capital expenditures in 2026, a number that makes the AI boom look less like a software story and more like a utilities-and-construction program financed by shareholders. Data centres are presented as 30-year assets, while the shorter-lived hardware—chips, servers and networking—turns over every five to six years, forcing repeated reinvestment to keep performance competitive.
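The gap between those asset lives is easy to quantify. The sketch below applies straight-line depreciation to the lives named above, with hypothetical $100 billion purchase amounts used only to compare the two; a dollar in five-to-six-year hardware is consumed, and must be re-spent, roughly six times faster than a dollar in a 30-year building.

```python
# Straight-line depreciation over the asset lives named in the article:
# 30-year data centres vs five-year chips and servers. The $100bn
# purchase amounts are hypothetical, chosen only to compare the two lives.

def annual_depreciation(cost_bn: float, life_years: int) -> float:
    """Straight-line: the same fraction of cost is expensed each year."""
    return cost_bn / life_years

building = annual_depreciation(100.0, 30)  # long-lived shell: ~$3.3bn/yr
hardware = annual_depreciation(100.0, 5)   # chips/servers: $20bn/yr

# The hardware dollar burns through the P&L six times faster, and the
# outlay recurs every refresh cycle to stay performance-competitive.
print(f"building: ${building:.1f}bn/yr   hardware: ${hardware:.1f}bn/yr")
```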
Selling Trainium outside AWS would be a further step: it turns a cloud differentiator into a product line that must stand on its own pricing and support economics. Business Insider reports Amazon says Trainium chips now have $225 billion in revenue commitments, a figure that signals demand but also locks the company into delivery schedules, capacity planning, and performance claims that can be benchmarked against Nvidia and other suppliers. In cloud, customers pay for outcomes—instances, throughput, managed services—and the provider can hide the messy details of hardware depreciation and procurement cycles. In chips, customers compare watts, yields and roadmaps, and they can switch vendors if promised cost-per-training-run fails to materialise.
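That buyer arithmetic can be sketched directly. Every number below is invented for illustration, none are real Trainium or Nvidia figures; the structural point is that amortised hardware cost can dwarf the energy bill, so chip price and how many runs a fleet survives often dominate cost per training run.

```python
# Hypothetical cost-per-training-run comparison of the kind chip buyers
# make. Every input is invented for illustration -- none are real
# Trainium or Nvidia figures.

def cost_per_run(chip_price_usd, chips, runs_amortised,
                 kw_per_chip, run_hours, usd_per_kwh):
    """Amortised hardware cost plus energy cost for one training run."""
    hardware = chip_price_usd * chips / runs_amortised
    energy = kw_per_chip * chips * run_hours * usd_per_kwh
    return hardware + energy

# Same workload on two invented parts: cheaper-but-slower vs pricier-but-faster.
slow = cost_per_run(10_000, 1000, 50, 0.5, 200, 0.10)
fast = cost_per_run(25_000, 1000, 50, 0.7, 80, 0.10)
print(f"cheaper-but-slower: ${slow:,.0f}   pricier-but-faster: ${fast:,.0f}")
```

With these invented inputs the amortised hardware cost ($200,000 vs $500,000 per run) swamps the energy bill, so price per chip and fleet lifetime decide the comparison; different inputs flip the answer, which is exactly why buyers benchmark rather than trust vendor claims.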
The timing also matters. Jassy argued that faster AWS growth pulls forward spending, and acknowledged that in periods of very high growth, capital expenditure can outpace revenue growth, squeezing free cash flow in the early years. That is the same logic that built AWS into a dominant platform, but it depends on a long runway of demand and on financing conditions that tolerate years of upfront build-out. If the AI cycle slows, the physical footprint remains: power contracts, buildings, and racks of equipment that only earn money when they are filled.
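The spend-now, harvest-later shape of that argument can be illustrated with a toy model. All parameters are hypothetical, chosen only to show the dynamic, not to forecast Amazon's financials: when revenue compounds faster than capex, free cash flow starts deeply negative and turns positive only after several years.

```python
# Toy model of "capex outpaces revenue early, pays off later".
# Every parameter is hypothetical, chosen to illustrate the shape the
# article describes -- these are not forecasts of Amazon's financials.

revenue, capex = 100.0, 80.0           # $bn in year 0 (hypothetical)
rev_growth, capex_growth = 0.28, 0.10  # revenue compounds faster than capex
margin = 0.35                          # operating cash as a share of revenue

fcfs = []
for year in range(8):
    fcf = margin * revenue - capex     # crude free-cash-flow proxy
    fcfs.append(fcf)
    print(f"year {year}: revenue {revenue:6.1f}  capex {capex:5.1f}  fcf {fcf:6.1f}")
    revenue *= 1 + rev_growth
    capex *= 1 + capex_growth
```

In this toy run the proxy stays negative for six years before flipping positive, which is the early-years squeeze the call described, and which only resolves if the growth runway actually lasts that long.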
Amazon’s pitch is that it has seen this movie before. This time, though, the bill is being paid in real time, while the company is still explaining how much of AI demand will become recurring, high-margin usage rather than one-off training spikes.