Media

Child experts urge Google to curb AI slop on YouTube

Open letter targets recommendations and YouTube Kids; age-gating pressure grows as platforms externalise risk

Images

Advocates argue that YouTube’s current disclosure system is ineffective, as young children are unable to read or comprehend the ‘altered and synthetic’ warnings tucked away in video descriptions (AP)

The group has demanded that Google ‘halt all investment in the creation of AI-generated videos for children’ (AFP via Getty Images)

More than 200 child-development experts and advocacy groups have urged Google to stop recommending AI-generated “slop” to children on YouTube, according to The Independent. In an open letter to Alphabet CEO Sundar Pichai and YouTube CEO Neal Mohan, the coalition asked for a ban on synthetic media recommendations for users under 18 and for YouTube Kids to stop hosting such content.

The complaint is not that generative video exists, but that YouTube’s recommendation system is doing distribution work at scale for material that is cheap to produce and hard for parents to audit. The letter describes “plotless, mesmerizing” AI videos engineered for retention, and argues that disclosures like “altered and synthetic” are meaningless to pre-literate viewers. It cites research suggesting that after children watch popular preschool content, a large share of subsequent recommendations can include AI-generated material, and points to investigations that have found abusive or disturbing videos slipping under child-friendly tags.

YouTube’s response, via spokesperson Boot Bullwinkle, is that the Kids app maintains “high standards” and limits AI content to a “small set of high-quality channels,” while the main platform relies on labeling and creator disclosure. That answer effectively concedes the core problem: the same recommendation machinery that turns any format into a growth product also turns moderation into an after-the-fact clean-up operation. When the marginal cost of producing another video approaches zero, the bottleneck becomes distribution—and the platform owns it.

The next step is already visible in parallel policy debates: when platforms cannot credibly promise that children will not be served junk, lawmakers move toward age-gating and identity checks. Canada is flirting with bans for under-16s (and therefore ID verification), and similar arguments are circulating across Europe. Once age becomes a compliance variable, the infrastructure tends to spread: the same verification rails used for “kids’ safety” can be reused for content controls, ad targeting rules, and enforcement of platform liability regimes.

The coalition’s demands also underline how the industry’s own economics invites regulation. AI slop is a predictable output of ad-funded feeds: volume drives watch time, watch time drives inventory, and inventory drives revenue. The “safety” layer then becomes a product in its own right—policy teams, trust-and-safety vendors, labeling schemes, and verification tools—each adding cost while pushing responsibility outward to creators and parents.

YouTube can keep saying AI content is a “small set” on YouTube Kids, but the letter’s central claim is about recommendations across the wider platform. The dispute is really over who controls the feed—and who is left holding the bill when an automated distribution system points children at the wrong videos.