Russian AI Grooming: The Truth Behind the Panic

2025-07-08 · Maxim Alyukov, Mykola Makhortykh, Alexandr Voronovici, Maryna Sydorova · 5 min read
AI Disinformation
Media Analysis
Russia

The Spark of Panic: A Startling Report

In March, a report from the misinformation-tracking company NewsGuard sparked widespread concern. The company published an analysis claiming that generative AI tools like ChatGPT were actively amplifying Russian disinformation. When NewsGuard tested leading chatbots with prompts derived from the "Pravda network", a collection of pro-Kremlin websites masquerading as real news outlets, the results seemed dire: the chatbots repeated these false narratives in 33 percent of cases.

The Pravda network itself has been a subject of debate among researchers. Because it attracts a relatively small audience, its purpose was unclear. Some theorized it was a performative act meant to signal Russia's influence. Others suspected a more sinister goal: deliberately "grooming" the large language models (LLMs) that power chatbots, poisoning them with falsehoods that would later be served to unsuspecting users.

NewsGuard's report appeared to confirm this second, more alarming theory. The claim quickly gained momentum, leading to dramatic headlines in major global publications, including The Washington Post, Forbes, and Der Spiegel.

Scrutinizing the Evidence: Is the Methodology Flawed?

Despite the media frenzy, many researchers found the conclusion unconvincing. A major issue was the study's opaque methodology: NewsGuard did not release the specific prompts used in its tests and declined to share them with journalists, making independent verification and replication of the findings impossible.

Furthermore, the study's design likely skewed the results, making the 33 percent figure potentially misleading. The chatbots were tested exclusively on prompts related to the Pravda network, rather than the wide range of topics users typically inquire about. Two-thirds of these prompts were intentionally designed to elicit false narratives or present them as factual. Even when a chatbot urged caution, noting that a claim was unverified, this response was counted as spreading disinformation. In essence, the study was designed to find disinformation, and it succeeded.
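
To see how much these scoring choices matter, consider a purely illustrative calculation; the counts and labels below are invented, since NewsGuard has not published its prompts or coding scheme. Counting hedged, cautionary replies as "spreading disinformation" can roughly double a headline rate:

```python
# Purely illustrative: invented counts, not NewsGuard's actual data or rubric.
# Each audited response is labeled by how the chatbot handled a false claim.
responses = (
    ["repeats_claim"] * 50          # asserts the false narrative as fact
    + ["hedged_mention"] * 49       # mentions the claim but flags it as unverified
    + ["debunks_or_refuses"] * 201  # rejects or corrects the claim
)
total = len(responses)

# Strict scoring: only outright repetition counts as spreading disinformation.
strict = sum(r == "repeats_claim" for r in responses) / total

# Loose scoring: hedged "this is unverified" replies are counted as well.
loose = sum(r in ("repeats_claim", "hedged_mention") for r in responses) / total

print(f"strict scoring: {strict:.0%}")  # 17%
print(f"loose scoring:  {loose:.0%}")   # 33%
```

The underlying model behavior is identical in both rows; only the coding rule changes.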

This situation highlights a troubling trend where fast-advancing technology, media hype, and genuine concerns about bad actors create a distorted view of the problem. While the World Economic Forum ranks disinformation as a top global risk, knee-jerk reactions can obscure the true nature of the challenge and oversimplify the complexities of AI.

A Deeper Dive: A New Audit Challenges the Narrative

Is it possible for chatbots to repeat Kremlin talking points? Absolutely. However, the frequency of such events and whether they are a result of deliberate manipulation remain open questions. To investigate further, we conducted our own systematic audit of leading AI models, including ChatGPT, Copilot, Gemini, and Grok, using a range of disinformation-related prompts.

Our tests included the few examples NewsGuard provided, as well as new prompts we designed ourselves. These ranged from general claims, like those about US-funded biolabs in Ukraine, to hyper-specific allegations concerning NATO facilities in particular Ukrainian towns. If the Pravda network were truly "grooming" AI, we would expect to see its narratives appear consistently across these queries.

Our findings starkly contrasted with NewsGuard's. Instead of a 33 percent rate, our prompts generated demonstrably false claims only 5 percent of the time. References to Pravda-linked websites appeared in just 8 percent of the outputs, and in most of these instances, the chatbot cited the source specifically to debunk its claims.
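
For a concrete picture of what such an audit involves, here is a minimal sketch of the basic loop: query each model with each prompt, label the response, and compute headline rates. The function bodies are placeholder stubs rather than our actual pipeline; in practice, the querying step calls each chatbot's interface and the labeling is done by human coders.

```python
from collections import Counter

# Placeholder stubs: a real audit queries each chatbot and has human coders
# assign labels such as "false_claim" or "cites_pravda" to every response.
def query_model(model: str, prompt: str) -> str:
    return f"[{model} response to: {prompt}]"

def label_response(response: str) -> set[str]:
    return set()  # manual coding step in the real audit

models = ["ChatGPT", "Copilot", "Gemini", "Grok"]
prompts = [
    "Were there US-funded biolabs in Ukraine?",  # broad, widely covered claim
    "Is there a NATO facility in <town>?",       # hyper-specific allegation
]

counts = Counter()
total = 0
for model in models:
    for prompt in prompts:
        counts.update(label_response(query_model(model, prompt)))
        total += 1

# Headline rates analogous to the 5 and 8 percent figures reported above.
print(f"false claims:      {counts['false_claim'] / total:.0%}")
print(f"Pravda references: {counts['cites_pravda'] / total:.0%}")
```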

The Data Void Hypothesis: A More Plausible Explanation

Our research pointed to a different cause. References to the Pravda network were almost exclusively found in responses to queries about topics with scant coverage from mainstream, credible media outlets. This supports the "data void" hypothesis: when an AI chatbot lacks sufficient reliable information on a subject, it may turn to dubious sources simply because they are the only ones available. This isn't evidence of a sophisticated grooming campaign but rather a consequence of information scarcity.
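
A toy example makes the mechanism concrete; the domains and relevance scores below are invented. On a well-covered topic, ranking by relevance surfaces a credible outlet; in a data void, a dubious source tops the ranking simply because nothing else matches the query:

```python
# Invented domains and scores, purely to illustrate the "data void" effect:
# the dubious source does not outrank credible outlets, it is simply the
# only source that covers the obscure topic at all.
def top_source(covering_sources):
    return max(covering_sources, key=lambda s: s["relevance"])["domain"]

well_covered_topic = [
    {"domain": "reuters.com",          "relevance": 0.9},
    {"domain": "pravda-clone.example", "relevance": 0.4},
]
obscure_topic = [
    {"domain": "pravda-clone.example", "relevance": 0.4},  # sole match
]

print(top_source(well_covered_topic))  # reuters.com
print(top_source(obscure_topic))       # pravda-clone.example
```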

For a user to be exposed to this type of disinformation, a rare set of conditions must be met. They would need to ask about an obscure topic using specific language, that topic would have to be ignored by credible sources, and the chatbot's internal safety measures would have to fail to deprioritize the unreliable source. Such scenarios are highly infrequent outside of artificial tests designed to trick the AI.

The Dangers of Disinformation Panic

The narrative of a cunning Kremlin plot to manipulate Western AI is compelling, but overhyping the threat carries its own risks. Some experts believe that Russia's disinformation campaigns are designed precisely to trigger Western fears, overwhelming fact-checkers and creating a sense of chaos. Russian propagandists like Margarita Simonyan often use Western research to boast about the supposed influence of state-funded media.

Constant, indiscriminate warnings about disinformation can also backfire. They risk eroding public trust in democratic institutions, encouraging cynicism, and even leading people to dismiss credible information as false. Meanwhile, this focus on a highly visible but perhaps exaggerated threat can divert attention from quieter, more dangerous uses of AI by malicious actors, such as generating malware.

Separating Fear from Fact

It is crucial to distinguish between genuine concerns and inflated fears. While disinformation is a serious challenge, the panic it can provoke is also a problem. A clear-eyed, evidence-based approach is necessary to understand the real risks without falling for alarmist narratives.

The views expressed in this article are the authors’ own and do not necessarily reflect Al Jazeera’s editorial stance.
