Canada pressures OpenAI over unreported ChatGPT flag before Tumbler Ridge shootings
Ottawa pushes AI platforms into pre-crime escalation; due process is outsourced to corporate risk teams
Canada’s federal government is demanding “answers” from OpenAI after reports that the suspect in the Tumbler Ridge, British Columbia shootings was flagged by ChatGPT months before the attack — raising the question of whether the company should have alerted police.
According to Global News, federal AI minister Evan Solomon said Ottawa is disturbed by reports that "concerning online activity" tied to the suspect was not reported to law enforcement "in a timely manner." British Columbia Premier David Eby called the allegations "profoundly disturbing" and said police are pursuing preservation orders for potential evidence held by digital services firms, including AI companies.
The premise is seductive to politicians: if an AI system “flags” something, it must be a lead; if it’s a lead, someone must be blamed for not escalating it. But the implied policy shift is larger than one tragedy. Governments are attempting to conscript private AI platforms into a pre-crime pipeline — turning customer support and trust-and-safety teams into auxiliary intelligence units.
Global News reports that a B.C. government representative met OpenAI officials on Feb. 11 in a meeting planned weeks earlier about the company potentially opening an office in Canada. The next day, OpenAI requested RCMP contact information; the province says it forwarded the request and connected the company with police. Yet the province also states OpenAI did not inform government officials it had potential evidence related to the shootings.
This is how “voluntary cooperation” metastasizes. First, a company builds internal detection systems to reduce reputational and legal risk. Then a high-profile incident occurs, and elected officials publicly ask why the company didn’t behave like a regulated critical-infrastructure provider. Soon “expectations” become standards, and standards become mandates.
The technical problem is that AI safety flags are not judicial findings. They are probabilistic classifications derived from text, context windows, and policy heuristics — and in practice they are noisy. False positives are inevitable. Once reporting becomes expected, the rational corporate response is over-reporting: forward anything remotely suspicious to law enforcement to avoid political blowback. That doesn’t merely waste investigative resources; it creates a permanent file on people whose “crime” was typing the wrong thing into a chatbot.
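The base-rate arithmetic behind that claim is worth making concrete. The sketch below uses illustrative numbers that are assumptions, not figures from any real system: even a classifier that is accurate by ordinary standards produces overwhelmingly false positives when the behavior it screens for is rare.

```python
# Illustrative base-rate arithmetic for a screening classifier.
# All numbers below are assumptions for illustration, not data
# from OpenAI or any real platform.

def flag_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screening classifier."""
    true_cases = population * prevalence
    non_cases = population - true_cases
    true_positives = true_cases * sensitivity          # real threats caught
    false_positives = non_cases * (1 - specificity)    # innocent users flagged
    return true_positives, false_positives

# Assume 10 million users, 1 in 100,000 genuinely dangerous,
# and a classifier that catches 90% of real threats while
# wrongly flagging only 1% of everyone else.
tp, fp = flag_outcomes(10_000_000, 1e-5, 0.90, 0.99)
precision = tp / (tp + fp)

print(f"true positives:  {tp:,.0f}")        # 90
print(f"false positives: {fp:,.0f}")        # 99,999
print(f"precision:       {precision:.4f}")  # 0.0009
```

Under these assumed numbers, fewer than one flag in a thousand points at a real threat; the other thousand-plus become files forwarded to police. Tightening the threshold trades some of those false positives for missed real cases, which is exactly the discretionary judgment companies are now being blamed for.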
Der Spiegel similarly reports that OpenAI considered notifying Canadian police months before the shooting. The operative word is "considered": the company is being asked to justify discretionary internal judgments as if it were a deputized law-enforcement agency.
Due process is the missing character in this drama. If the state wants preemptive surveillance, it should be forced to do it openly, under law, with warrants and accountability — not by outsourcing it to a private platform whose incentives are to protect its business and whose errors will fall on users. The same governments that warn about “AI harms” are now lobbying to make AI companies responsible for predicting human violence — and to punish them when the crystal ball fails.