
OpenAI safety process faces scrutiny in Musk lawsuit

Former employee says GPT-4 shipped via Bing in India before internal review; governance ends up in court when partners control deployment


Tim Fernholz, techcrunch.com

A federal courtroom in Oakland heard testimony this week about an internal OpenAI safety process that, according to a former employee, was bypassed when a version of GPT‑4 was deployed in India through Microsoft’s Bing. The witness, Rosie Campbell, told the court she joined OpenAI’s AGI readiness team in 2021 and left in 2024 after the group was disbanded, according to TechCrunch. The testimony came in a case brought by Elon Musk, who is seeking to unwind OpenAI’s shift from its non-profit origins into a large commercial company.

Campbell’s account sketches a familiar pattern in fast-growing platforms: safety teams are easiest to staff when the organisation is small and research-led, and easiest to cut when product deadlines and distribution deals start to dominate. She said OpenAI’s culture shifted over time from frequent discussion of AGI and safety to a more product-focused posture, while separate safety efforts such as the Superalignment team were also shut down around the same period. Under cross-examination she acknowledged that building advanced models requires significant funding, a point that effectively ties “safety” to whoever can write the biggest cheques for compute and data-centre capacity.

The India deployment episode matters less for what happened in one market than for what it implies about who can overrule whom. Campbell described the Deployment Safety Board as a gatekeeper that did not get to evaluate the model before it shipped via a partner with its own incentives: Bing needed competitive features, and Microsoft controlled key infrastructure and a major distribution channel. TechCrunch reports that the same incident was later cited as one of the red flags behind OpenAI’s non-profit board briefly firing CEO Sam Altman in 2023, an attempt at governance that collapsed within days when staff rallied to Altman and Microsoft pushed to restore him.

Another former board member, Tasha McCauley, testified about board-level concerns that Altman was not forthcoming, including failures to inform directors about launching ChatGPT and about potential conflicts of interest. The board’s formal mandate was to oversee the for-profit subsidiary; in practice, its leverage depended on the information it was given and on whether employees and commercial partners would tolerate enforcement. When a governance structure can be reversed by an internal petition and an external partner’s pressure, the courtroom becomes the place where oversight is attempted after the fact, through discovery and sworn testimony rather than internal controls.

OpenAI has published model evaluations and a safety framework, but declined to comment to TechCrunch on its current approach to AGI alignment. The company recently hired a new head of Preparedness from rival Anthropic, a move Altman publicly framed as helping him “sleep better tonight.” In Oakland, the argument is not about sleep but about who, concretely, gets to say no when shipping is the business model.

The witness described a safety board that could be bypassed by a partner deployment. The non-profit board that was meant to supervise the company could be bypassed by a week of employee revolt.