AI chatbots quote journalism at scale
Muck Rack finds one in four quotes comes from news outlets; Reuters content circulates while platforms capture the interface
Fifteen million AI-generated quotes across four major chatbots were traced back to their sources, and roughly one in four came from journalism, according to a Muck Rack analysis cited by Press Gazette and summarised by The Decoder. Reuters was the most frequently cited outlet globally, followed by Forbes, the Guardian, the Financial Times and CNBC. The same dataset puts former Business Insider editor Henry Blodget at the top of the individual-byline ranking.
The numbers matter because they describe a supply chain most readers never see. A user asks ChatGPT, Gemini, Claude or Perplexity a question; the model answers in a confident voice; and a quarter of the “quotation” layer—names, claims, attributed facts—still originates in traditional reporting. That creates a strange split in who bears the cost. Newsrooms pay for reporters, editors, lawyers and travel. The chatbot interface captures the user relationship, the brand loyalty and, increasingly, the subscription revenue.
Muck Rack’s response is to sell “AI visibility” scores to journalists and publishers—essentially a new kind of audience metric. It is an attempt to turn a leak in the value chain into something measurable and billable. But it also points to a more basic problem: the platforms that intermediate information can change the rules without negotiating with the people who produce it. Publishers can optimise for search, then watch search replace itself with AI summaries.
In parallel, the same generative tools are being used for industrial-scale abuse. A separate AI Forensics study of 2.8 million Telegram messages in Italian and Spanish groups describes a market for “nudifying bots” that turn ordinary photos into synthetic nude images. The report says archives of non-consensual intimate imagery are sold for 20 to 50 euros, with some bot promoters claiming affiliate commissions of up to 40%. Links shared inside these groups often point to AI “girlfriend” generators and nudification services, and the content reportedly spreads between language communities.
Telegram’s design features are part of the machinery: bots for access control, channel folders for organising feeds, and the ability for groups to reappear quickly after takedowns, according to the report. The researchers argue Telegram should be treated as a “Very Large Online Platform” under the EU’s Digital Services Act, which would trigger stricter obligations around systemic risks.
Taken together, the two studies describe the same pattern in different directions: AI systems absorb high-cost reporting without paying for it, and automate low-cost harassment into a paid product. In both cases, the liability and cleanup costs land on someone else—newsrooms trying to fund reporting, victims trying to get images removed—while the intermediaries keep the traffic.
Muck Rack’s ranking still begins with a familiar name: Reuters, a wire service built for syndication, now feeding machines that do not negotiate syndication contracts.