Technology

OpenAI rehires from Mira Murati’s Thinking Machines Lab

A $12B startup bleeds founders and security talent; frontier AI still runs on control of compute and data

Thinking Machines Lab CEO Mira Murati. Slaven Vlasic/Getty Images

OpenAI has hired back another employee from Thinking Machines Lab, the $12 billion startup founded by former OpenAI CTO Mira Murati. It is another small but telling data point in what Silicon Valley insists on calling a "talent war," as if the scarce resource were résumés rather than leverage.

According to Business Insider, Jolene Parish has left Thinking Machines Lab to rejoin OpenAI. Parish previously spent three years at OpenAI and before that worked roughly a decade in security at Apple, per her LinkedIn profile. Her return follows earlier reported reversals: Business Insider notes that two Thinking Machines Lab cofounders—Barret Zoph and Luke Metz—have departed, alongside researcher Sam Schoenholz. The Information also reported that researcher Lia Guy rejoined OpenAI, while The Wall Street Journal previously reported that cofounder Andrew Tulloch left for Meta.

Thinking Machines Lab raised a headline-grabbing $2 billion round last year at a $12 billion valuation, Business Insider reports, and launched its first product, “Tinker,” in October. It also recruited marquee names, including Neal Wu (a competitive programming standout) and Soumith Chintala, the creator of PyTorch, now its CTO.

Yet the churn suggests an old pattern: it is hard to build a frontier-model lab as a standalone entity, because the real moats are not press releases but (1) stable access to massive training runs, (2) privileged datasets and pipelines, (3) distribution, and (4) the ability to turn "safety" into an internal bureaucracy that conveniently doubles as a permission system.

Security and infrastructure roles are especially revealing. A security veteran isn’t just another engineer; they sit near the crown jewels: model weights, training data provenance, internal evaluation harnesses, and the access-control machinery that determines who can touch what. When such people drift back to OpenAI, it hints that the gravitational pull is less about culture and more about where the keys—and the budget—actually are.

Neither OpenAI nor Thinking Machines Lab commented to Business Insider. But the incentives are legible without comment: compensation packages, compute allocations, and the implicit promise that the “safest” place to do frontier work is the institution that already owns the infrastructure—and the narrative.

The AI industry sells itself as decentralizing intelligence, while the actual practice consolidates power in a handful of organizations that can finance training at scale and gate access via policy. The “open” in OpenAI has long since become a trademark, not a description; now it’s also a recruiting strategy.