EU states push under-16 social media bans
Age verification becomes a de facto internet ID layer; child-protection rhetoric builds a KYC market and normalizes traceability
An EU survey found that over 90 percent of respondents want public authorities to help protect kids online (Source: Tech insider)
Spanish prime minister Pedro Sánchez announced multiple measures to further regulate online spaces at the World Governments Summit in Dubai in February (Source: La Moncloa)
European governments are converging on a simple slogan—“protect children online”—and arriving at a policy that looks less like parenting and more like an identity regime.
EUobserver reports that Spain, France, Denmark, Portugal, Germany, Greece, Austria and others are considering bans or strict limits on social media for under-16s, following Australia’s December rollout of a ban that reportedly led to the deletion or restriction of 4.7 million accounts. Spanish prime minister Pedro Sánchez described social platforms as a “digital Wild West” of “addiction, abuse, pornography, manipulation, and violence.” The public mood is on his side: a 2025 Eurobarometer survey found more than 90 percent of respondents want public authorities to help protect children online.
The mechanism that makes these bans “work” is the part politicians rarely headline: age verification. At scale, age gates are not a checkbox; they are infrastructure. They require either (a) persistent identity checks (KYC-style), (b) third-party identity providers, or (c) device- and biometrics-based estimation—all of which expand data collection, create new breach targets, and normalize traceability.
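The three paths differ mainly in who ends up holding linkable data. A minimal sketch of what each one must retain after a "successful" check — the field names and types here are illustrative, not any real vendor's schema:

```python
from dataclasses import dataclass, fields

@dataclass
class KycCheck:
    # (a) Persistent identity check: binds a legal identity to an account.
    document_number: str   # passport / national ID
    legal_name: str
    date_of_birth: str
    account_id: str        # the linkage: identity <-> platform account

@dataclass
class IdpAssertion:
    # (b) Third-party identity provider: the platform sees less, but the
    # provider now logs which services requested verification, and when.
    provider: str
    subject_token: str     # stable pseudonym held at the provider
    over_16: bool
    issued_at: str

@dataclass
class BiometricEstimate:
    # (c) Device/biometric estimation: no document, but a face scan is
    # processed and a score tied to the session or device is stored.
    estimated_age: float
    confidence: float
    model_version: str

def retained_fields(record) -> set:
    """The data that survives the check -- i.e. the breach surface
    each verification path creates."""
    return {f.name for f in fields(record)}
```

In every variant something linkable persists: a document number on file, a provider-side log of which platforms asked, or a biometric score bound to a device. That is the sense in which age gates are infrastructure rather than a checkbox.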
Digital rights groups argue the policy is a diversion. Simeon de Brouwer of EDRi told EUobserver that age verification is a “band-aid” that pulls resources away from addressing the underlying incentive problem: platforms optimize for engagement because that’s how they get paid. Even supporters of intervention concede enforcement is lagging. Bruegel researcher Paul Richter criticized the European Commission for not fully using the Digital Services Act (DSA) tools already on the books, which include obligations to mitigate systemic risks to minors.
But the political economy points in one direction: identity requirements are legible to regulators and enforceable against platforms. “Fix the feed” is harder. “Show me your papers” scales beautifully.
This is where the censorship-by-API problem emerges. Once a platform must verify age, it must also decide what counts as “verified,” which vendors are acceptable, what data is retained, and what happens when verification fails. That creates a compliance market—identity brokers, verification SDKs, facial-age estimation services—whose business model depends on making the internet less anonymous and more permissioned.
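Those compliance decisions have to be encoded somewhere concrete. A hypothetical policy object — every name below is invented for illustration — showing how each choice the paragraph lists (accepted methods, approved vendors, retention, failure handling) becomes a permission check at the gate:

```python
# Hypothetical age-gate policy. Each field corresponds to a decision a
# platform must make once verification is mandatory; none of these
# values reflect any real platform's configuration.
AGE_GATE_POLICY = {
    "accepted_methods": ["kyc_document", "idp_assertion", "facial_estimation"],
    "approved_vendors": ["vendor_a", "vendor_b"],  # placeholder vendor names
    "retention_days": 365,        # how long verification data is a breach target
    "on_failure": "restrict_account",
    "appeal_channel": None,       # often left unspecified in practice
}

def admit(verification: dict) -> bool:
    """Access to the platform reduces to a check against the policy:
    approved method, approved vendor, and a positive age result."""
    return (
        verification.get("method") in AGE_GATE_POLICY["accepted_methods"]
        and verification.get("vendor") in AGE_GATE_POLICY["approved_vendors"]
        and verification.get("over_16", False)
    )
```

The point of the sketch is the shape, not the values: once the gate exists, "who may speak here" is a lookup against a vendor allowlist — which is precisely the permissioned, less-anonymous internet the compliance market sells.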
And it invites mission creep. If a system can verify age for minors, it can be extended to adults “for safety,” “for elections,” “for misinformation,” or simply to satisfy future liability theories. In practice, age verification is a backdoor to real-name policy—outsourced, privatized, and wrapped in child-protection rhetoric.
Europe’s online speech regime has long relied on platform enforcement under vague duties of care. Adding mandatory identity checks would harden that model into something closer to an access-control layer for public discourse. The stated goal is to keep teenagers from doomscrolling. The likely outcome is that everyone gets a little more trackable, and the firms selling the tracking get a new regulated moat.