Russia's AI-Fueled Disinformation Campaign Exposed
The Rise of Operation Overload
A pro-Russia disinformation campaign is using consumer artificial intelligence tools to generate a “content explosion,” according to new research. The campaign focuses on inflaming tensions around sensitive topics like global elections, the war in Ukraine, and immigration.
Known by several names, including Operation Overload and Matryoshka, the operation has been active since 2023. Multiple organizations, including Microsoft and the Institute for Strategic Dialogue, have linked it to the Russian government. By impersonating media outlets, the campaign aims to sow division in democratic nations, with a primary focus on Ukraine and a secondary focus on audiences in the US.
AI as a Propaganda Multiplier
Researchers from Reset Tech and Check First have documented a dramatic increase in the volume of content produced by the campaign. Between July 2023 and June 2024, they identified 230 unique pieces of content. However, in just the last eight months, Operation Overload generated 587 unique items, the majority of which were created using AI.
This surge is attributed to the accessibility of free, consumer-grade AI tools. These tools enable a tactic called “content amalgamation,” allowing operatives to quickly produce multiple forms of content—from images and videos to QR codes—all pushing the same false narrative.
“This marks a shift toward more scalable, multilingual, and increasingly sophisticated propaganda tactics,” the researchers wrote. Aleksandra Atanasova, lead researcher at Reset Tech, expressed surprise at the diverse and layered approach. “It's like they have diversified their palette to catch as many different angles of those stories,” she noted, adding that the campaign appears to use publicly available AI generators rather than custom-built tools.
Unmasking the AI Tools of Choice
While identifying every tool is difficult, researchers pinpointed one specific text-to-image generator: Flux AI, developed by Germany-based Black Forest Labs. Using image analysis, they found a 99 percent likelihood that fake images of riots in Berlin and Paris, allegedly showing Muslim migrants, were created with Flux AI.
Researchers were able to replicate the images using discriminatory prompts like “angry Muslim men,” highlighting how these models can be abused to promote racism and stereotypes. A spokesperson for Black Forest Labs stated the company builds in safeguards and supports collaboration between developers and platforms to prevent misuse. However, Atanasova confirmed that the images they analyzed contained no identifying metadata.
From Fake Images to Cloned Voices
Operation Overload also makes heavy use of AI voice-cloning technology. The number of videos produced by the campaign more than doubled in the last eight months, with the majority using cloned voices to deceive viewers.
In one notable example from February, a video appeared to show Isabelle Bourdon, a French academic, encouraging Germans to riot and vote for the far-right AfD party. The original footage was from a university video where she discussed an academic prize, but AI was used to replace her voice with a fake script about German politics.
Spreading Disinformation Across Platforms
The campaign disseminates its AI-generated content through a network of over 600 Telegram channels, as well as bot accounts on social media platforms like X and Bluesky. The operation also recently expanded to TikTok for the first time, where just 13 accounts racked up 3 million views before the platform demoted the content. A TikTok spokesperson confirmed the accounts were removed and said the platform remains vigilant against such manipulation.
While researchers noted that Bluesky had suspended 65 percent of the reported fake accounts, they pointed out that “X has taken minimal action despite numerous reports on the operation.”
A Counterintuitive Amplification Strategy
In a bizarre twist, Operation Overload actively contacts media outlets and fact-checking organizations. Since September 2024, the campaign has sent up to 170,000 emails to over 240 recipients, providing links to its fake content and asking journalists to investigate its authenticity. This counterintuitive strategy appears to be an attempt to gain mainstream exposure: operatives consider it a victory when their content is featured by a real news outlet, even in a debunking article.
The Broader Threat of AI in Disinformation
This campaign is part of a larger trend. Pro-Russia groups like CopyCop have previously used AI language models to create fake news websites. While these sites often have low traffic, their content can sometimes rise to the top of Google search results.
A recent report estimated that Russian disinformation networks are producing at least 3 million AI-generated articles annually, poisoning the data used by popular AI chatbots. As AI tools become more powerful, experts predict this surge in AI-fueled disinformation will only continue. “They already have the recipe that works,” Atanasova concluded. “They know what they're doing.”