A Chinese law enforcement official accidentally exposed a global intimidation campaign by using ChatGPT as a personal diary to document operations targeting dissidents and political critics worldwide.
OpenAI's investigation revealed hundreds of operators and thousands of fake accounts working across social media platforms to silence opposition through psychological pressure, forged documents, and coordinated smear campaigns. The user treated ChatGPT like a journal for tracking what they called "cyber special operations," according to OpenAI's latest threat report.
These activities included impersonating US immigration officials to warn Chinese dissidents about supposed legal violations, creating fake obituaries and gravestone photos to spread death rumors, and filing thousands of false reports against activists' social media accounts.
In one documented case from October 2025, the same user attempted to use ChatGPT to plan a smear campaign against Sanae Takaichi, Japan's first female prime minister, after she criticized China's human rights record in Inner Mongolia. The proposed operation involved posting negative comments on social media, using fake email accounts posing as foreign residents to send complaints about Takaichi to other Japanese politicians, and amplifying hashtags attacking her policies.
ChatGPT refused to assist with the political smear campaign, but evidence suggests other AI models were used instead. Hashtags matching the user's described strategy appeared on Japanese online communities in late October 2025, though they gained minimal traction, with single-digit views on YouTube videos and zero engagement on most posts.
"This is what Chinese modern transnational repression looks like," Ben Nimmo, principal investigator at OpenAI, told reporters ahead of the report's release. "It's not just digital. It's not just about trolling. It's about trying to hit critics of the CCP [Chinese Communist Party] with everything, everywhere, all at once."
The operation extended beyond social media manipulation into direct psychological warfare. Tactics included targeting dissidents' mental health and families, hacking their livestreams, and creating fake evidence for platform violations.
One report detailed efforts to get activist Hui Bo (@huikezhen) removed from X by filing thousands of reports against his posts and creating dozens of fake accounts using his likeness.
OpenAI investigators matched descriptions from the ChatGPT user with real-world online activity. False rumors about dissident Jie Lijian's death that surfaced in 2023 corresponded exactly with the user's description of creating phony obituaries and gravestone photos for mass posting.
OpenAI banned the user after discovering the activity but noted that the campaign is only one visible component of broader state-backed information operations.
This incident follows earlier OpenAI actions against suspected Chinese government-linked accounts. In October 2025, the company banned users asking ChatGPT to design social media monitoring tools for scanning platforms like X, Facebook, Instagram, Reddit, TikTok, and YouTube for extremist speech and political content.
"Cases like these are limited snapshots, but they do give us important insights into how authoritarian regimes might abuse future AI capabilities," Nimmo said in October regarding earlier bans of Chinese-linked accounts attempting surveillance tool development.
The report emerges amid intensifying US-China competition over artificial intelligence technology development and deployment standards.