OpenAI details Chinese AI-backed campaign to silence dissidents
Report describes mass reporting and forged court documents across major platforms; content moderation becomes the choke point
OpenAI says it has disrupted a Chinese “large-scale, resource-intensive and sustained” influence operation that used generative AI to harass and silence critics across major social platforms. According to Business Insider, OpenAI’s latest “Disrupting Malicious Uses of AI” report describes hundreds of staff, at minimum, running thousands of fake accounts across “scores of platforms,” with targets that included dissidents abroad and even foreign political figures.
The mechanics described are less about sophisticated persuasion than about exploiting the enforcement machinery of private platforms. OpenAI says operatives mass-filed abusive reports to trigger automated bans and content restrictions, and used AI-generated images that mimicked screenshots of conversations or comments to make complaints look credible. In one example cited in the report, operatives forged US county court documents and submitted them to platforms as a pretext for takedowns. The operation also used ChatGPT in a mundane way: an account attributed to the Chinese government uploaded internal status reports and asked the model to polish them.
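The failure mode is easy to see in miniature. The sketch below is a hypothetical, stripped-down version of a threshold-based auto-moderation queue of the sort the report describes being gamed; the class, the 25-report threshold, and the account names are illustrative assumptions, not any platform’s real system.

```python
# Hypothetical sketch of a threshold-based auto-moderation queue.
# All names and numbers are illustrative, not any platform's real system.
from collections import defaultdict
from dataclasses import dataclass, field

AUTO_RESTRICT_THRESHOLD = 25  # assumed: N unique reporters triggers action


@dataclass
class ReportQueue:
    # target_post -> set of reporter ids that have flagged it
    reports: dict = field(default_factory=lambda: defaultdict(set))

    def file_report(self, reporter_id: str, target_post: str) -> str:
        """Record a report; auto-restrict once enough unique reporters pile on."""
        self.reports[target_post].add(reporter_id)
        if len(self.reports[target_post]) >= AUTO_RESTRICT_THRESHOLD:
            return "restricted"  # no human review happens before this point
        return "queued"


# A coordinated operation only needs THRESHOLD fake accounts per target:
queue = ReportQueue()
for i in range(AUTO_RESTRICT_THRESHOLD):
    status = queue.file_report(f"sockpuppet_{i}", "dissident_post_1")
print(status)  # -> "restricted"
```

The point of the toy is that nothing in the loop distinguishes twenty-five genuine complaints from twenty-five sockpuppets; raw volume is the trigger.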
The episode highlights how “trust and safety” has become a critical interface for political power. Content moderation systems are built to reduce spam, fraud and harassment at scale, but the same bulk-reporting workflows and authenticity rules can be turned into a denial-of-service tool against a person’s ability to speak. Once enforcement is automated, the cheapest move is not to argue with an opponent but to generate enough plausible-looking complaints to make the opponent disappear for “policy” reasons.
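One mitigation that trust-and-safety practitioners discuss publicly, and which is not drawn from the OpenAI report, is to weight each report by the reporter’s standing rather than count raw volume, so a burst from throwaway accounts never crosses the action threshold on its own. The reputation scores and threshold below are assumptions for illustration.

```python
# Hypothetical mitigation sketch: weight reports by reporter reputation so
# raw volume from throwaway accounts cannot trigger automated action alone.
# Reputation values (0.0 = brand-new/sanctioned, 1.0 = long-standing) and the
# threshold are illustrative assumptions.

def weighted_report_score(reports: list[tuple[str, float]]) -> float:
    """Sum reporter reputations instead of counting unique reporters."""
    return sum(reputation for _, reputation in reports)


ACTION_THRESHOLD = 10.0  # assumed

# 100 fresh sockpuppets at reputation 0.02 score ~2.0 -- below threshold --
# while a dozen established users at 0.9 cross it.
sockpuppets = [(f"sock_{i}", 0.02) for i in range(100)]
print(round(weighted_report_score(sockpuppets), 2))   # 2.0  -> human review
established = [(f"user_{i}", 0.9) for i in range(12)]
print(round(weighted_report_score(established), 2))   # 10.8 -> eligible for action
```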
Platforms are already responding in the language of infrastructure rather than politics. Bluesky’s head of trust and safety, Aaron Rodericks, told Business Insider that the company has hired specialized investigative staff and expanded monitoring systems, and that it recently removed a “small number of accounts” consistent with the OpenAI report for “inauthentic coordinated activity.” A person familiar with Meta’s work told Business Insider that similar activity is tracked in its regular adversarial reporting and acted on under platform rules.
As AI tools spread, the boundary between “coordinated inauthentic behavior” and organized political campaigning becomes a decision made inside private policy teams, enforced through API access, account scoring and automated queues. The report’s most concrete detail is also its simplest: a government-linked operator used ChatGPT to rewrite internal reports while thousands of fake accounts tried to weaponize the reporting button.
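Account scoring of this kind can be illustrated with a toy coordination signal: reports that cluster against one target inside a tight time window, filed by young accounts, score far higher than organic complaints spread over days. The window length, account ages, and formula below are illustrative assumptions, not any platform’s detector.

```python
# Hypothetical coordination signal: a burst of reports against one target in
# a short window, filed by recently created accounts, scores as likely
# inauthentic. Window, ages, and the scoring formula are assumptions.
from statistics import median


def coordination_score(report_times: list[float],
                       account_ages_days: list[float],
                       window_seconds: float = 3600.0) -> float:
    """Fraction of reports landing in the densest one-hour window,
    discounted by how established the reporting accounts are."""
    times = sorted(report_times)
    burst = max(sum(1 for t in times if start <= t < start + window_seconds)
                for start in times)
    burst_ratio = burst / len(times)
    newness = 1.0 / (1.0 + median(account_ages_days))  # ~1.0 for day-old accounts
    return burst_ratio * newness


# 40 reports in ten minutes from week-old accounts vs. 40 reports spread
# across a day from accounts older than two years:
coordinated = coordination_score([i * 15.0 for i in range(40)], [7.0] * 40)
organic = coordination_score([i * 2160.0 for i in range(40)], [900.0] * 40)
print(round(coordinated, 3), round(organic, 5))  # ~0.125 vs ~0.00006
```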
OpenAI says it removed the account and published indicators of the campaign, while the targets were left to appeal moderation decisions one automated strike at a time.