Technology

OpenAI flags and bans shooter-linked ChatGPT account months before Canada massacre

Platform safety heuristics collide with due process and surveillance incentives, Moral policing becomes product feature without guarantees

Images

globalnews.ca
Click to play video: 'Remembering Tumbler Ridge shooting victims' Click to play video: 'Remembering Tumbler Ridge shooting victims' globalnews.ca
globalnews.ca
globalnews.ca
Click to play video: 'Questions over return to learning for Tumbler Ridge students as community grieves' Click to play video: 'Questions over return to learning for Tumbler Ridge students as community grieves' globalnews.ca

OpenAI says it flagged and banned a ChatGPT account linked to a Canadian school shooter seven months before the attack—an announcement that sounds like prevention, until you examine what “flagging” can actually mean.

According to Global News, OpenAI confirmed that an account connected to Jesse VanRootelsar was identified in June 2025 through “abuse and detection and enforcement efforts” and subsequently banned for violating usage policy. On Feb. 10, VanRootelsar killed eight people—family members at home, then students and an educator at Tumbler Ridge Secondary School—before being found dead from an apparent self-inflicted gunshot wound, police said. OpenAI says it contacted the RCMP after the incident and will support the investigation.

The key detail is what OpenAI did not do in June. The company says it considered referring the account to law enforcement but decided the activity did not meet a higher threshold—specifically, it did not indicate an “imminent and credible risk” or planning of serious physical harm. This is the unavoidable technical and institutional bind for LLM providers: they are expected to be tools, moral referees, and pre-crime sensors, while being structurally incapable of guaranteeing any of the above.

From a systems perspective, “flagging” is typically a mix of heuristics: prompt patterns associated with self-harm or violence, attempts to bypass safeguards, repeated requests for weapon construction, or contextual markers (time, repetition, escalation). But without invasive client surveillance—monitoring a user across devices, accounts, and platforms—any model provider is mostly limited to what the user types into that one service. Even then, the signal is noisy: fiction writing, curiosity, research, roleplay, and genuine intent can look identical as text.

That leaves platforms with two bad options. Over-enforce and you get false positives, distressed families, and a private company effectively dispatching police based on probabilistic text classification. Under-enforce and, after the inevitable tragedy, the same company is asked why it didn’t act sooner.

Global News notes that YouTube and Roblox also removed accounts tied to the suspect after the shooting. The pattern is consistent: platforms become evidence machines after the fact—preserving logs, accounts, and content for police—while simultaneously being pressured to act as prevention engines before the fact.

Many should recognize the direction of travel. “Safety” incentives push providers toward more retention, more correlation, and tighter identity binding—because the only way to make flagging more actionable is to know more about the person behind the prompts. The result is a familiar bargain: a thinner promise of security in exchange for thicker surveillance, administered not by courts but by trust-and-safety teams and policy thresholds that can’t be cross-examined.

OpenAI’s story is not that it stopped a shooter. It’s that it built a system that can neither reliably prevent violence nor reliably protect due process—and will be blamed for both failures anyway.