North America

OpenAI weighs reporting Canadian school shooting suspect to RCMP

Account flagged months earlier but deemed not an imminent threat, as AI safety drifts into private policing while the platforms still break themselves

OpenAI banned Jesse Van Rootselaar’s account in June 2025 for violating its usage policy. Photograph: Jaque Silva/NurPhoto via Getty Images

OpenAI, the company that sells itself as the world’s most helpful autocomplete, says it considered calling Canadian police about a user months before that user carried out one of Canada’s worst school shootings in years. At the same time, Amazon’s cloud division reportedly suffered two outages triggered by its own AI tooling. Put together, the stories sketch a trajectory: “safety” systems that quietly morph into private policing infrastructure—while remaining brittle enough to take down the platforms they’re meant to protect.

According to the Associated Press via The Guardian, OpenAI flagged the account of Jesse Van Rootselaar in June 2025 through its abuse-detection systems for “furtherance of violent activities.” The company banned the account for violating its usage policy. It also debated whether to refer the case to the Royal Canadian Mounted Police (RCMP), but decided the activity did not meet its threshold: an “imminent and credible risk of serious physical harm.”

After the shooting in Tumbler Ridge, British Columbia—where police say the 18-year-old killed eight people and later died by suicide—OpenAI says it proactively contacted the RCMP with information about the suspect’s use of ChatGPT. The Wall Street Journal first reported OpenAI’s disclosure, per The Guardian.

The civil-liberties problem is not that a private firm might tip off police; it’s the incentives and the opacity. OpenAI’s internal standard (imminent, credible, serious harm) sounds like language from a constitutional law seminar, but it is a product-policy decision made by employees who are neither judges nor juries, using signals outsiders can’t audit. If the company refers too aggressively, it becomes a privatized pre-crime hotline. If it refers too conservatively, it gets blamed after the fact. Either way, the public is asked to trust a black box.

That black box is also being bolted onto critical infrastructure. Zero Hedge, citing a report, says Amazon Web Services was “taken down twice” by its own AI tools—an example of automated risk controls that can fail closed at hyperscale. Even if the details are thin (Zero Hedge is not exactly a change-control ticket), the direction is credible: more AI-driven automation in operations means more single points of algorithmic failure.

The same industry lobbying for “AI regulation” is busily building systems that already behave like regulators—deciding who is suspicious, what counts as imminent, and when to escalate to the state. Meanwhile, the cloud that runs half the internet is increasingly governed by software that can, on a bad day, police itself into an outage.

In the long run, the question isn’t whether AI can spot a threat, or whether it can keep a data center stable. It’s who gets to define “threat,” under what process, and what happens when the automated guardians start pulling the plug—on users or on the infrastructure itself.