Deepfakes flood Iran war feeds
Euronews tracks recycled clips and AI edits across platforms; moderation labels arrive only after the viral window closes
Doctored videos and recycled clips are spreading through social media feeds as the Iran war drives a familiar attention economy: fast, emotionally legible content wins, and corrections arrive after the audience has moved on.
Euronews reports that AI-generated deepfakes and miscaptioned footage have circulated widely during the conflict, often repackaged across platforms in formats designed for frictionless sharing. Short vertical video, cropped screen recordings, and image macros travel faster than context because they are easy to repost and hard to verify at a glance. Once a clip is detached from its original source, each subsequent repost becomes a new “original” in the eyes of recommendation systems that reward velocity and engagement rather than provenance.
The distribution chain is not random. A small number of high-following accounts, aggregator pages, and “war update” channels act as hubs, pushing content into the algorithmic mainstream, where it is copied into Telegram channels, X timelines, TikTok repost farms, and Instagram Stories. The same incentives that make influencers chase watch-time also make propaganda-style edits rational: the creator who posts first captures reach, while the creator who posts accurately competes against a clock. When platforms add labels or remove posts, the intervention often reads as an additional signal to audiences already primed to see censorship as validation, while the underlying clip continues to circulate via re-uploads and screenshots.
Moderation in this environment becomes a cost-control exercise. Platforms can act decisively on clear policy violations such as violent content or explicit incitement, but deepfakes and misleading context are harder to adjudicate at scale, especially across languages and fast-moving events. Fact-checking partnerships and “context” panels can reduce spread at the margin, but they are structurally late: the peak distribution window for a viral clip is measured in hours, not days. By the time a correction is attached, the most engaged viewers have already seen the clip, shared it, and formed an impression.
The second-order effects accumulate. Newsrooms and officials are pulled into a reactive posture, spending time debunking instead of reporting, while audiences learn that “what’s trending” is not the same as “what happened.” Meanwhile, the accounts that reliably generate attention—whether by editing, exaggerating, or fabricating—build follower bases that can be monetised later through ads, subscriptions, or political influence.
In the Iran war feed, the winning format is rarely a document or a map. It is a 15-second clip that looks like evidence.