AI-generated campaign ads spread in US races
Synthetic voices and images cut production costs faster than disclosure rules can evolve, leaving campaigns to set their own definition of obvious fakery
Image: State Representative James Talarico standing next to the Texas flag (nbcnews.com)
At least 15 campaign ads using AI-generated content have run in the United States since November, a small number that is already large enough to force campaigns, regulators and platforms to decide what counts as “real” in paid political messaging. NBC News reports that AI is showing up across races from school boards to governor’s contests, used for everything from cartoonish attack imagery to synthetic audio that mimics a rival’s voice.
One of the most explicit examples cited by NBC comes from Massachusetts, where Republican gubernatorial primary candidate Brian Shortsleeve ran an AI-generated radio ad that sounds like Democratic Gov. Maura Healey saying things she did not say. The ad did not carry an explicit AI disclaimer; instead it framed the clip as what Healey’s ads would sound like “if she was honest.” Shortsleeve’s campaign told NBC its policy is to disclose AI use only when a depiction is “not obvious to a reasonable viewer,” a standard that effectively lets the campaign set its own threshold.
National committees are now using the same playbook. NBC notes that the National Republican Senatorial Committee released an AI-generated video of Texas Democratic Senate nominee James Talarico reading real tweets on race and transgender rights. At the city level, former New York governor Andrew Cuomo’s mayoral effort used AI in ads, including one depicting criminals supporting incumbent mayor Zohran Mamdani. In Texas, Rep. Jasmine Crockett’s Senate campaign drew scrutiny for its own use of AI, while Republicans also used her likeness in AI-generated attacks.
The attraction is not subtle: political advertising is expensive, and AI lowers both production costs and turnaround time. NBC cites industry estimates that traditional ad production starts at around $1,000 and climbs sharply depending on casting, postproduction and distribution. AI tools can replace stock-footage searches, reshoots and even voiceovers with prompts, allowing smaller campaigns — and cost-conscious large ones — to iterate quickly.
That speed collides with the slower, labor-heavy parts of the information ecosystem. Fact-checking, legal review and media scrutiny do not scale the way synthetic content does, particularly when the material is crafted to stay just inside the boundary of "obviously fake" rather than being clearly labeled. Mark Jablonowski, CEO of the progressive ad firm DSPolitical, told NBC that the core problem is not AI as such but the use of generative tools to create "something that never existed" in order to deceive.
The regulatory picture remains patchy. NBC describes a landscape in which campaigns test what they can get away with, while disclosure norms depend on voluntary choices by the very actors who benefit from ambiguity. Platforms, meanwhile, sell reach and targeting regardless of whether the creative is filmed, illustrated or synthesized.
AI political ads are still “trickling” in, NBC writes, but the examples already span multiple states and levels of government. The technology is arriving first where the budgets are tight and the incentives to cut corners are strongest.