Anthony Ha: Molotov attack on Sam Altman's home collides with New Yorker profile
OpenAI credibility fight plays out through narratives and security gates
A Molotov cocktail hit Sam Altman’s San Francisco home early Friday, and police later arrested a suspect at OpenAI’s headquarters after an alleged threat to burn the building down. TechCrunch reports that Altman responded with a blog post arguing that an “incendiary” New Yorker profile about him landed in an atmosphere of “great anxiety about AI” and that he had underestimated how much public narratives can raise the temperature.
The profile, by Ronan Farrow and Andrew Marantz, draws on interviews with more than 100 people and portrays Altman as unusually driven even by Silicon Valley standards, with multiple sources questioning his trustworthiness. Altman’s reply concedes mistakes — including his handling of the internal board conflict that briefly removed him as CEO in 2023 — and frames the sector’s infighting as a “ring of power” dynamic, where the perceived prize is control over artificial general intelligence. His proposed antidote is to “share the technology broadly” so that “no one” holds that ring.
This is not a dispute about one executive's temperament. It is a political-economy problem that AI companies have made unavoidable: they are building systems with labor-market consequences and security implications, but governance is largely reputational, mediated through press profiles, investor decks and internal boards with limited public accountability. When the enforcement mechanism becomes online outrage and physical intimidation, the incentives tilt toward secrecy and private security rather than transparency, and toward firms that can absorb that cost.
The episode also exposes a quiet asymmetry. If a factory underpays workers or a bank mis-sells loans, regulators can demand records and impose penalties. When a frontier model is trained, deployed and iterated at speed, outsiders often learn about trade-offs — safety constraints, red-team results, commercial pressures — through leaks, selective disclosures, or long-form journalism. That leaves a narrow channel between public concern and corporate decision-making, and it is easily clogged by personality drama.
Altman’s call to “de-escalate the rhetoric and tactics” is a reasonable plea, but it also underlines the industry’s dependence on persuasion as a substitute for enforceable rules. The New Yorker profile is not a security review; the Molotov cocktail is not democratic oversight. Yet both now sit in the same causal chain, shaping how one of the world’s most powerful AI firms presents itself and how aggressively it locks down.
On Friday morning, the fire was small enough to be extinguished quickly, and no one was hurt. The most concrete change so far is that OpenAI says it has increased security around its leadership and facilities.