Politics

Swedish police push AI and biometrics for CSAM investigations

Isöb cites 25,000 annual cases and 150 investigators; the child-protection framing risks building reusable mass-scanning infrastructure

Images (aftonbladet.se):

"The children who are subjected to abuse need protection and support. The perpetrators need support too – they don't go to the health clinic to seek help to stop being sexually interested in children," says Louise Åhlén. Photo: Björn Lindahl

"The police want to use AI to save children" (aftonbladet.se headline)

In addition to more investigators handling internet-related sexual abuse of children, more child-interview specialists, internet surveillance officers, analysts, and AI developers are needed, argues Louise Åhlén. Photo: Björn Lindahl

Swedish police say they want to use AI to “save children” from online sexual abuse. The practical outcome, if history is any guide, is a generalized scanning and profiling infrastructure—built in the name of the worst crime imaginable, then quietly repurposed for everything else.

According to SVT and Aftonbladet’s reporting on the national police group Isöb (internet-related sexual abuse against children), about 150 officers nationwide handle roughly 25,000 cases per year spanning grooming, image-sharing among minors, forwarding of illegal material, and systematic abuse. Louise Åhlén, described as a police operations developer, says the unit receives around 20 alerts per day and argues that technology must scale: AI, biometric recognition, and automated filtering of seized phones to flag abuse material.
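The cited figures translate into a stark workload, which the scaling argument rests on. A quick back-of-the-envelope calculation, using only the numbers from the reporting (25,000 cases, 150 officers, 20 alerts per day):

```python
# Caseload arithmetic from the figures cited in the SVT/Aftonbladet reporting.
cases_per_year = 25_000
investigators = 150
alerts_per_day = 20

cases_per_investigator = cases_per_year / investigators
alerts_per_year = alerts_per_day * 365

print(f"Cases per investigator per year: {cases_per_investigator:.0f}")
print(f"Incoming alerts per year: {alerts_per_year:,}")
```

Roughly 167 cases per investigator per year, and about 7,300 alerts annually, which is the gap the AI pitch is meant to fill.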

The argument is emotionally airtight—children are being harmed, and investigators are overwhelmed. The technical and legal questions are less convenient.

Start with data sources and scope. Åhlén points to tools that could search seized phones and, in some cases, use biometrics and registers such as passport databases to identify victims faster. That implies (1) large-scale automated content classification; (2) face or pattern matching across government-held databases; and (3) workflow automation that decides what deserves human attention.
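Point (3) deserves emphasis, because "workflow automation that decides what deserves human attention" is concrete and consequential. A minimal sketch of such a triage step, under the assumption of a score-and-threshold design; every name and number here is illustrative, not any real police system:

```python
# Hypothetical sketch of automated triage on a seized device: a model scores
# each file, and a fixed threshold decides which files a human ever reviews.
# All identifiers and the threshold value are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable

REVIEW_THRESHOLD = 0.80  # assumed cutoff; moving it trades false positives
                         # against false negatives

@dataclass
class TriageResult:
    path: str
    score: float   # model's estimated probability the file is abuse material
    flagged: bool  # True -> routed to a human reviewer

def triage(paths: list[str], score_fn: Callable[[str], float]) -> list[TriageResult]:
    results = []
    for path in paths:
        score = score_fn(path)
        # Files below the threshold are never seen by a human: the threshold,
        # not an investigator, decides what "deserves attention".
        results.append(TriageResult(path, score, score >= REVIEW_THRESHOLD))
    return results
```

Even this toy version makes the governance question visible: the threshold constant is a policy decision embedded in code, and nothing in the pipeline records why it was set where it was.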

Then comes error. Any model deployed at scale produces false positives and false negatives. In CSAM investigations, false positives are not an inconvenience; they are a life-changing accusation, device seizure, and reputational damage—often long before any court has weighed evidence. Police already seize devices and run forensic pipelines; adding AI triage raises questions about validation, auditability, and the chain of custody: what exactly did the model “see,” how was it trained, what thresholds were used, and can a defendant challenge it meaningfully?
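The base-rate problem makes this concrete. Because genuinely illegal files are rare relative to the volume scanned, even a model that sounds accurate produces mostly wrong flags. The numbers below are illustrative assumptions, not measured performance of any deployed system:

```python
# Why false positives dominate at scale: a worked base-rate example.
# All four inputs are assumptions chosen for illustration.
files_scanned = 1_000_000  # e.g. photos across seized devices in a year
prevalence    = 0.001      # assume 0.1% of scanned files are actually illegal
sensitivity   = 0.95       # assumed true positive rate
fpr           = 0.01       # assumed false positive rate ("99% specific")

true_pos  = files_scanned * prevalence * sensitivity
false_pos = files_scanned * (1 - prevalence) * fpr
precision = true_pos / (true_pos + false_pos)

print(f"True positives:  {true_pos:,.0f}")    # 950
print(f"False positives: {false_pos:,.0f}")   # 9,990
print(f"Precision:       {precision:.1%}")    # about 8.7% of flags are correct
```

Under these assumptions, more than nine out of ten flags point at innocent material, and each one is a person whose device, reputation, and life sit in the review queue.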

Next is legal basis and mission creep. Sweden already has powerful tools—secret data interception, expanded wiretapping, and broad seizure powers—justified by organized crime and terrorism. Building an AI layer for CSAM detection creates a ready-made architecture for scanning communications or devices for other categories: drugs, extremism, “hate,” or whatever becomes the next political priority. Europe’s recurring “chat control” proposals show how quickly child-protection rhetoric becomes a pretext for mass monitoring.

Finally, governance. Åhlén says “the legal support exists” but “someone has to build” the functions. That should alarm anyone who still thinks surveillance is primarily constrained by law rather than by budgets, vendor roadmaps, and institutional appetite.

If Sweden wants more child protection, it can fund more investigators, improve victim support, and prioritize cases. If it wants an automated suspicion machine, AI will happily oblige—and it will not stop at the browser history of the guilty.