How Russian Disinformation Infiltrated Top AI Chatbots
A new report from the Institute of Strategic Dialogue (ISD) reveals a troubling trend: some of the world's most popular AI chatbots are actively promoting Russian state propaganda. The research found that OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok are all serving users information from sanctioned Russian media entities, including state-run news sites and outlets linked to Russian intelligence, particularly when asked about the war in Ukraine.
The Alarming Discovery
Researchers discovered that these AI models are vulnerable to what are known as data voids—topics where legitimate information is scarce—which makes them easy targets for manipulation by disinformation actors. According to the ISD's findings, almost one-fifth of all responses to questions about the war cited sources attributed to the Russian state. This raises significant concerns, especially within the European Union, where many of these media outlets are officially sanctioned.
Pablo Maristany de las Casas, the lead ISD analyst on the project, highlighted the gravity of the situation, questioning how chatbots should handle references to these sources. With millions of users in the EU turning to AI for real-time information, the ability of these platforms to filter out sanctioned propaganda is being seriously challenged.
How Researchers Uncovered the Propaganda
The ISD team conducted a comprehensive experiment in July, which they confirmed was still reproducible in October. They posed 300 questions to the four chatbots across five languages: English, Spanish, French, German, and Italian. These questions were categorized as neutral, biased, or "malicious" and covered sensitive topics such as NATO's role, peace negotiations, Ukrainian military recruitment, refugees, and alleged war crimes.
The research uncovered a clear pattern resembling confirmation bias: the more biased or leading the question, the more likely the chatbot was to cite a Russian state-affiliated source. Malicious queries—those designed to confirm an existing opinion—returned pro-Kremlin content 25% of the time. Biased questions did so 18% of the time, while even neutral questions pulled from these sources in over 10% of cases.
The Source of the Disinformation
Since its full-scale invasion of Ukraine in 2022, the EU has sanctioned at least 27 Russian media sources for spreading disinformation. The ISD report identified that chatbots cited several of these, including Sputnik Globe, RT (formerly Russia Today), EADaily, and the Strategic Culture Foundation. Beyond official media, the chatbots also referenced known Russian disinformation networks and influencers who amplify Kremlin narratives. This isn't an isolated finding; previous studies have shown popular chatbots mimicking Russian propaganda.
Responses from Tech Giants and Governments
When confronted with the findings, the companies and entities involved had varied responses.
- OpenAI stated it takes steps to prevent the spread of misleading information from state-backed actors and clarified that the issue stems from the models' search functionality pulling from the live internet, not from the models themselves being manipulated.
- Elon Musk’s xAI, the parent company of Grok, responded simply with: “Legacy Media Lies.”
- Google and DeepSeek did not respond to requests for comment.
- A Russian Embassy spokesperson stated they oppose censorship on political grounds and that restricting Russian media undermines free expression.
- The European Commission noted that it is the responsibility of platform providers to block sanctioned content and that they are in contact with national authorities on the matter.
A Battle for the Information Ecosystem
Experts see this as a calculated move to weaponize the West's information infrastructure. Lukasz Olejnik, a research fellow at King’s College London, called it a "smart move" to target LLMs as they become go-to reference tools. This strategy is amplified by disinformation networks like "Pravda," which have reportedly flooded the internet with content specifically designed to "poison" AI models. McKenzie Sadeghi, a researcher at NewsGuard, explains that these networks excel at filling data voids with false information, making it difficult for automated systems to keep up.
Comparing Chatbot Performance
The study noted differences between the platforms:
- ChatGPT cited the most Russian sources and was the most susceptible to biased queries.
- Grok often linked to social media accounts that amplified Kremlin talking points.
- DeepSeek occasionally returned large volumes of content from Russian state sources.
- Google’s Gemini performed the best overall, frequently displaying safety warnings alongside its results.
The Path Forward: Regulation and Context
As AI chatbots grow in popularity, they are attracting more regulatory scrutiny. ChatGPT may soon reach the user threshold to be designated a Very Large Online Platform (VLOP) in the EU, which would subject it to stricter rules regarding illegal content. However, Maristany de las Casas argues that the solution isn't just about removal. He suggests that platforms need to provide users with better context, explaining why a source might be sanctioned or have a conflict of interest. This approach would empower users to better understand the information they are consuming, especially when it appears alongside trusted sources.