OpenAI weighs tipping RCMP on ChatGPT user before BC school massacre
Company defines "imminent and credible" harm threshold as private safety policy drifts toward pre-crime without due process
Source: "ChatGPT-maker OpenAI considered alerting Canadian police about school shooting suspect months ago" (independent.co.uk)
OpenAI says it flagged a Canadian user for “furtherance of violent activities” in June 2025, banned the account, and briefly considered tipping off the Royal Canadian Mounted Police — then decided the activity did not meet its internal threshold for referral. Months later, that same person, 18-year-old Jesse Van Rootselaar, carried out a mass killing in Tumbler Ridge, British Columbia, leaving eight dead before dying by suicide, according to the RCMP. The Independent reports that victims included a teaching assistant and five students aged 12–13, and that the suspect had prior mental-health-related contacts with police.
The episode is interesting less as a hindsight morality play than as a governance preview: a private AI vendor is quietly designing a de facto “duty to report” regime, complete with its own evidentiary standards, risk definitions, escalation pathways, and documentation — but without the constitutional constraints, transparency obligations, or adversarial process that (in theory) restrain state power.
According to The Wall Street Journal, as summarized by The Independent, OpenAI’s threshold for law-enforcement referral is whether there is an “imminent and credible risk of serious physical harm to others.” In this case, OpenAI says it did not identify “credible or imminent planning,” so it did not refer the matter to police at the time. After the shooting, OpenAI says employees proactively contacted the RCMP with information about the individual’s use of ChatGPT.
That single phrase — "imminent and credible risk" — is doing enormous institutional work. It implies ongoing monitoring ("abuse detection efforts"), a classification pipeline ("furtherance of violent activities"), and a decision gate (refer vs. don't refer). Yet OpenAI has not publicly detailed what signals trigger escalation: specific prompts, iterative planning behavior, attempts to acquire weapons, location hints, or cross-account correlation. Nor has it explained how it balances false positives (ruining innocent lives by misclassification) against false negatives (missing real threats) — the classic "pre-crime" tradeoff, except here it is implemented by a platform whose core business model depends on collecting and processing user inputs at scale.
The worry is not that OpenAI might sometimes call police; it’s that “sometimes” becomes normal, then expected, then demanded by regulators — and finally used as a liability shield. Once a company has a referral process, it will be judged not only on whether it exists, but on whether it was used. That creates incentives to over-report, to log more, to retain more, and to build internal compliance bureaucracies that look suspiciously like intelligence units.
Meanwhile, law enforcement gets the upside of private-sector surveillance without the paperwork: a tip that arrives pre-packaged as “credible” and “imminent,” generated by proprietary models and undisclosed heuristics. If this is the future, the first question is not “why didn’t they report?” but “who authorized them to decide?”