AI child abuse reports surge in US tip line data
NBC News counts over one million generative AI reports in 2025; platforms respond by wiring chat logs closer to police
AI has added a confounding element to child sexual abuse cases for law enforcement.
nbcnews.com
OpenAI promises Canada tighter safety protocols after ChatGPT flagged a shooter's violent chats but never called police
the-decoder.com
The US National Center for Missing and Exploited Children says its CyberTipline received more than one million reports tied to generative AI between January and September 2025, according to NBC News. Investigators say the material ranges from AI-altered photos of real children to fully synthetic images, and that the output is becoming harder to distinguish from “traditional” child sexual abuse material.
The reporting describes a familiar pattern: enforcement capacity grows slowly while the supply of content scales instantly. Homeland Security Investigations’ Cyber Crimes Center told NBC News that in the first six months of 2025, reports of child exploitation involving generative AI rose more than 600% compared with 2023 and 2024 combined. Prosecutors, meanwhile, are left sorting edge cases—images without an identifiable victim, or datasets where AI-made material sits alongside real abuse—while the volume of tips far exceeds the number of investigations that can be opened.
That pressure is already reshaping how AI firms handle private communications. The Decoder reports that OpenAI, in a letter to Canada's AI minister Evan Solomon, promised "tighter safety protocols" after a fatal school shooting in British Columbia. The suspect's chats with ChatGPT were flagged internally as potential real-world violence; OpenAI reviewed and blocked the account but did not contact police. Under the company's proposed changes, it would broaden the criteria for sharing account data with authorities, set up direct communication channels with Canadian law enforcement, and improve detection of evasion tactics.
The two stories sit on the same fault line: platforms are being asked to function as both publisher and informant. The more AI tools are linked—by politicians, regulators, and the companies themselves—to harms like exploitation or violence, the more “safety” becomes an operational pipeline: automated flagging, human review, account action, and onward reporting. Once that pipeline exists at scale, it becomes difficult for competitors to opt out, because the reputational and legal downside of being the one firm that did not escalate is larger than the downside of false positives.
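To make that pipeline concrete, here is a minimal sketch in Python of how such an escalation flow might be wired. Everything in it is a hypothetical illustration: the `Flag` fields, the 0.5 and 0.9 thresholds, and the stage names are assumptions for the sake of the example, not any company's actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    NO_ACTION = auto()
    HUMAN_REVIEW = auto()
    ACCOUNT_BLOCK = auto()
    REPORT_TO_AUTHORITIES = auto()

@dataclass
class Flag:
    account_id: str
    classifier_score: float   # automated-flagging confidence, 0..1 (hypothetical)
    reviewer_confirmed: bool  # set to True once a human reviewer agrees

def escalate(flag: Flag, report_threshold: float = 0.9) -> Action:
    """Map a flagged conversation to an escalation stage.

    The stages mirror the pipeline described above: automated flagging,
    human review, account action, onward reporting. All thresholds here
    are illustrative, not any vendor's actual policy.
    """
    if flag.classifier_score < 0.5:
        return Action.NO_ACTION
    if not flag.reviewer_confirmed:
        return Action.HUMAN_REVIEW       # queue for a human analyst
    if flag.classifier_score < report_threshold:
        return Action.ACCOUNT_BLOCK      # act on the account only
    return Action.REPORT_TO_AUTHORITIES  # the step OpenAI chose not to take
```

Seen through this sketch, OpenAI's promise to broaden the criteria for sharing account data amounts to lowering `report_threshold`: a policy shift expressed as a parameter, which is exactly why the pipeline, once built, scales so easily.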
NBC News notes that offenders are exploiting “open-source AI models and ready-made sexual exploitation platforms,” a reminder that the most restrictive rules tend to bind the most visible companies. But the compliance burden still falls on the mainstream services that host everyday conversations, because those are the systems that can be subpoenaed, regulated, and publicly shamed.
OpenAI’s proposed fix is not a new model capability but a new relationship: a faster handoff from chat logs to police. The company is now building the mechanism it previously chose not to use.