
Panthalassa raises $140 million for floating AI data centres

Wave-powered offshore nodes aim to bypass land-based grid and cooling limits; satellite links replace fibre as the new bottleneck

Jeremy Hsu, arstechnica.com

An 85-metre steel sphere designed to ride ocean swells is now being pitched as the next home for AI chips. According to Ars Technica, wave-energy startup Panthalassa has raised $140 million as part of roughly $200 million in backing to build a pilot manufacturing facility near Portland, Oregon, and to test a new prototype called Ocean-3 in the northern Pacific in 2026.

Panthalassa’s bet is that the hardest constraint on AI is no longer silicon but siting: land-based data centres struggle to secure grid connections, cooling water, permits and local acceptance, while their electricity draw makes them political targets. Putting compute offshore recasts those bottlenecks as engineering problems (corrosion, storms, remote maintenance) and a communications problem: models and prompts must travel over satellite links instead of fibre. Benjamin Lee, a computer architect at the University of Pennsylvania, tells Ars the concept “converts an energy transmission problem into a data transmission problem,” but he also flags the obvious trade-off: satellites are typically a backup, not the backbone, for high-throughput coordination between machines.

The technical sales pitch is tidy. Each node is a large floating sphere with a vertical tube below the surface; wave motion forces water up the tube into a pressurised reservoir that spins a turbine to generate electricity. The surrounding ocean doubles as a heat sink, offering cooling without the fresh-water consumption that increasingly constrains onshore facilities. Panthalassa says it wants nodes that can survive more than a decade at sea without human intervention, a requirement echoed in its job listings, and it hopes to deploy “thousands” over time.
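Panthalassa has not published power figures, but the standard deep-water wave energy flux formula from linear wave theory offers a rough plausibility check. The sea state, capture width and wave-to-wire efficiency below are illustrative assumptions, not company numbers:

```python
import math

def wave_power_flux(h_s: float, t_e: float, rho: float = 1025.0, g: float = 9.81) -> float:
    """Deep-water wave energy flux in watts per metre of wave crest.

    Linear wave theory: P = rho * g^2 * H_s^2 * T_e / (64 * pi),
    with H_s the significant wave height (m) and T_e the energy period (s).
    """
    return rho * g**2 * h_s**2 * t_e / (64 * math.pi)

# Assumed (illustrative) North Pacific sea state: 2.5 m significant height, 8 s period.
flux_w_per_m = wave_power_flux(2.5, 8.0)  # roughly 25 kW per metre of wave crest

capture_width_m = 85.0  # assumption: capture width on the order of the sphere's diameter
efficiency = 0.20       # assumed overall wave-to-wire conversion efficiency
electrical_w = flux_w_per_m * capture_width_m * efficiency

print(f"Incident flux: {flux_w_per_m / 1e3:.1f} kW/m")
print(f"Electrical output under these assumptions: ~{electrical_w / 1e3:.0f} kW")
```

Even under these generous assumptions the output is in the hundreds of kilowatts per node, which would explain why the company talks about “thousands” of nodes rather than a single floating campus.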

But the operational realities are less photogenic than the renderings. If satellite bandwidth tops out at “hundreds of megabits per second per terminal,” as Lee suggests, that may be enough for real-time prompt-response workloads, yet it is a different regime from fibre-connected clusters that shuffle vast intermediate data between GPUs. Moving large model updates or datasets could fall back to periodic “sneakernet by ship,” with storage disks physically ferried to and from the nodes—an old trick in cloud logistics, now complicated by open-ocean geography. Maintenance is also a business model question: a dispersed fleet of proprietary offshore machines turns routine hardware failures into maritime operations, and the entity that owns the node controls when and how the compute comes back online.
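A back-of-envelope comparison shows where the satellite-versus-ship crossover sits. The 300 Mbit/s link speed (a reading of Lee’s “hundreds of megabits per second”) and the four-day ship transit are illustrative assumptions, not measured figures:

```python
def transfer_days(payload_bytes: float, link_bits_per_second: float) -> float:
    """Days needed to move a payload over a sustained network link."""
    return payload_bytes * 8 / link_bits_per_second / 86_400

TB = 1e12        # terabyte in bytes
SAT_BPS = 300e6  # assumption: "hundreds of Mbit/s" read as 300 Mbit/s per terminal
SHIP_DAYS = 4    # assumed one-way boat transit to a node in the northern Pacific

for payload_tb in (1, 10, 100, 1000):
    days = transfer_days(payload_tb * TB, SAT_BPS)
    faster = "link" if days < SHIP_DAYS else "ship"
    print(f"{payload_tb:>5} TB: {days:7.1f} days by satellite -> {faster} wins")
```

Under these assumptions the crossover falls around a dozen terabytes: prompts and responses fit comfortably on the link, while full checkpoint or dataset shipments favour the boat.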

Offshore hosting also changes jurisdiction. A floating data centre is still anchored to ports for manufacturing, launch and servicing, and it still depends on satellite providers and spectrum rules. The result is not a compute utopia outside politics, but a supply chain with more chokepoints and fewer local complainants.

Ocean-3 is scheduled for testing in 2026. The first real benchmark will not be tokens per second, but how often a steel sphere in the Pacific needs a human with a boat.