AI Persuasion Unveiled: New Studies Explore Influence

2025-06-26 · Natasha Goel · 8 minute read
Tags: AI · Persuasion · Research

The Stubborn Nature of Beliefs

It is well established in political communication that simply presenting facts rarely changes people's minds and can sometimes even deepen polarization. This phenomenon is often attributed to motivated reasoning—our tendency to hold onto beliefs tied to our social identities, emotions, and worldviews, which can lead us to override contradictory evidence as a protective measure. Consequently, when faced with accurate information that clashes with their views, individuals don't always reconsider; sometimes they become even more entrenched.

AI Enters the Persuasion Arena

Against this challenging backdrop, a new wave of research is examining the persuasive capabilities of generative artificial intelligence platforms like ChatGPT. Some of these studies have sparked significant ethical debates. This surge in research, including my own recent study (currently available as a preprint on the Open Science Framework), brings several key questions to the forefront: Firstly, how will public trust in AI evolve, and how will this affect its ability to persuade? Secondly, what kinds of public opinion can AI genuinely influence, and under what circumstances? Lastly, can these AI tools effectively reach individuals who are most resistant to persuasion?

The Challenge of Persuasion in a Polarized World

The idea that evidence-based persuasion might be ineffective has shaped both academic study and public discussion. While some studies offer counter-evidence, the success of persuasion heavily depends on context. A major hurdle is the high level of societal polarization, which fuels distrust not only of partisan sources but also of various elite figures, including experts. This poses a critical question: in our current divided political climate, can alternatives like AI overcome audience skepticism towards information contradicting their beliefs and persuade with evidence where human sources often fail?

Investigating AI's Persuasive Power

Our research focused on two primary questions. First, can conversations with ChatGPT reduce confidence in false beliefs among individuals across the political spectrum? Second, if so, is ChatGPT persuasive because people perceive it as an exceptionally trustworthy source? This second question concerns source credibility—the principle that who delivers a message can be as important as the message's content. People are generally more receptive to new information when they view the source as knowledgeable and unbiased. We aimed to discover whether AI's persuasive ability stems from perceptions of its superior knowledge or neutrality compared to more politicized human communicators like politicians, pundits, or anonymous online users.

All participants engaged in a five-round or five-minute conversation with ChatGPT-4o. However, the study manipulated their perception of who they were interacting with: a human expert, ChatGPT, or a layperson. The objective was to ascertain if AI could indeed be an effective persuader and whether this effectiveness was rooted in the perceived intelligence or objectivity of the technology.
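The preprint describes the full protocol; purely as a rough sketch of what such a dialogue loop can look like in practice, the snippet below runs a fixed number of exchanges against GPT-4o through the OpenAI Python client. The system prompt and the `run_dialogue` helper are hypothetical stand-ins, not the study's actual materials, and the source-label manipulation (expert, ChatGPT, or layperson) would live in the participant-facing interface rather than in the API call itself.

```python
# Minimal sketch of a five-round persuasion dialogue with GPT-4o.
# The system prompt below is a hypothetical stand-in, not the study's
# actual instructions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a friendly, careful fact-checker. The participant holds this "
    "belief: {belief}. Respond to the specific reasons they give with "
    "evidence-based counterpoints, in a kind, non-judgmental tone."
)

def run_dialogue(belief: str, rounds: int = 5) -> list[dict]:
    """Run a fixed number of user/assistant exchanges and return the transcript."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT.format(belief=belief)}]
    for _ in range(rounds):
        user_turn = input("Participant: ")  # participant's next message
        messages.append({"role": "user", "content": user_turn})
        response = client.chat.completions.create(model="gpt-4o", messages=messages)
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"Interlocutor: {reply}\n")
    return messages
```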

Surprising Results on AI Persuasion

The findings were unambiguous: conversations with ChatGPT were persuasive, but this persuasiveness didn't seem to rely on unique source credibility. Participants across all groups showed reduced certainty in false or unsupported beliefs. This held true for all Democratic-aligned beliefs and most Republican-aligned ones, with exceptions for beliefs concerning climate change causes and COVID-19 vaccine safety. Significantly, the change wasn't just about reduced confidence—many participants altered their views entirely. Remarkably, 29% reversed their stance, shifting from an inaccurate belief to a more accurate one. For example, a participant who initially believed there was widespread voter fraud in the 2020 presidential election later indicated that the election was won without widespread fraud after conversing with ChatGPT.

This prompts the question of why, and the answer is critical. Is ChatGPT generating superior content? Is it the personalization? The interactive nature? Or do people simply trust it more? If AI’s persuasive strength comes mainly from the quality of its messages, it offers a significant opportunity to disseminate persuasive, evidence-based information on a large scale, potentially enhancing public understanding and deliberation. However, it also implies that the same mechanisms could be exploited to spread false or harmful content. If the messages themselves are persuasive, almost anyone could generate influence cheaply. Conversely, if persuasion hinges on perceived objectivity or credibility, AI's effectiveness might be fragile as public attitudes evolve. Our study examined this latter possibility and found little indication that AI's identity as a source drove persuasion.

While we were genuinely struck by ChatGPT’s ability to shift even deeply held beliefs, the practical application of AI as a persuasive tool requires more thorough examination. Considering those in our lives who cling fiercely to false beliefs, it seems overly optimistic to expect a single chatbot interaction, or even a single AI-generated text block, to change their minds. So where does this leave future research? While increasing evidence suggests both human and AI-generated information can alter opinions, researchers studying AI-driven persuasion should now move beyond merely asking if AI can persuade. Instead, they should focus on understanding how, when, and at what cost it does so. Crucially, this involves designing studies that are rigorous and ethically sound, as misusing these tools in research carries a high price.

How will trust in AI play out over time?

In our study, labeling the source as AI neither diminished nor enhanced its persuasive capacity. Participants who believed they were talking to a human expert showed greater belief change, but believing the interlocutor was ChatGPT conferred no comparable boost. This suggests that the strength of the content itself may matter more than the perceived identity of the deliverer.

If the primary motivation for using AI in fact-based interventions is to overcome the limitations of human messengers—especially in highly polarized environments—these results don't strongly support doing so. There's some evidence that people find AI-generated messages more persuasive than human ones when the source is unknown. However, once they know the source is AI, they exhibit some aversion. Our findings align with this; people still showed a preference for expert human messaging.

For the time being, this might actually benefit AI. Its messages can remain persuasive even when the source is known. But if public trust in AI diminishes, either generally or within specific demographics, the impact of these messages could wane, similar to what has happened with expert sources in polarized settings. This could also incentivize attributing human labels to AI-generated content to preserve influence. In this respect, AI’s persuasive power—whether through its messages or its identity—is not static; it’s a dynamic factor that researchers and policymakers must monitor closely.

What is the scope of its persuasive power?

Many existing studies are conducted in environments free from common real-world elements like incivility, partisan hostility, group threats, and mis/disinformation. Yet, these factors define the information landscapes where beliefs are maintained. Some evidence suggests that persuasion is more challenging in conflictual settings. Therefore, further research is necessary to determine if AI’s seemingly robust persuasive ability can indeed withstand our real-world information environments.

Furthermore, while our study concentrated on beliefs addressable with factual corrections, it's uncertain whether AI can shift deeper attitudes—particularly those rooted in identity, values, or worldview. Concerning immigration, for instance, there's a significant difference between correcting a belief like “Immigrants commit more crimes than U.S. citizens” and altering an attitude such as “Overall, there is too much immigration to the United States.” The former is a factual claim that can be directly addressed with evidence, as we did. This required relatively straightforward prompting, emphasizing evidence-based responses keyed to the respondent's reasoning, which produced kind but fact-focused conversations. The latter, however, may reflect concerns about fairness, scarcity, national identity, or culture. These are not merely factual disagreements, and attempts to shift them may be far more complex, demanding careful thought at the prompt-design stage.
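To make that contrast concrete, here is a hypothetical pair of prompt sketches; neither string comes from the study, and the comparison is only meant to illustrate why attitude-targeted prompts are the harder design problem.

```python
# Hypothetical prompt sketches illustrating the factual-correction vs.
# attitude-change distinction; neither is the study's actual prompt.

# Correcting a factual claim: the target proposition and the relevant
# evidence can be named directly.
FACT_CORRECTION_PROMPT = (
    "The participant believes that immigrants commit more crimes than "
    "U.S. citizens. Address the specific reasons they give, citing "
    "crime-rate evidence, in a warm and non-judgmental tone."
)

# Shifting an attitude: there is no single proposition to correct, so the
# prompt must anticipate value-laden concerns (fairness, scarcity, identity),
# a far less well-defined design task.
ATTITUDE_PROMPT = (
    "The participant feels there is too much immigration to the United "
    "States. Explore which concerns drive this view (economic, cultural, "
    "fairness) before offering any evidence, and acknowledge genuine value "
    "disagreements rather than treating them as factual errors."
)
```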

Will these tools reach the audiences most resistant to persuasion?

The third key question concerns reach. A major challenge lies in persuading highly radicalized individuals who may be immersed in hyper-partisan media. Using AI as an interactive fact-checking tool in such contexts presents substantial ethical issues and risks damaging broader trust in generative AI, potentially reducing its overall effectiveness. Take, for example, the anonymous research team at the University of Zurich that faced criticism for conducting undercover research on a popular Reddit subreddit. Users of that subreddit never consented to participate in a study, yet they unknowingly interacted with an AI bot that, in some instances, even extracted information from them. Research is ultimately a public good, and non-consensual extraction from the public causes irreparable harm. Beyond the questionable research design and severe ethical violation, the episode also breached a crucial level of trust between the public and researchers at a time when this relationship is already strained.

For researchers aiming to conduct ethical work in online spaces that mirror real-world information environments, there is now a concern that the well may have been poisoned. Public skepticism towards AI could increase, likely including AI used in research. Poor and unethical research practices will only exacerbate the situation. To preempt these concerns, researchers should not sacrifice quality—in terms of cross-disciplinary collaboration, ethics, or thorough pre-registrations—for speed. A competitive research landscape does not justify bypassing such safeguards.

The present era is characterized by a crisis in managing both information and attention in shaping public opinion. There's some optimism about AI's potential for social good in this arena. However, for now, that promise is obscured by unresolved questions about whether AI can truly succeed in the contexts where it is most needed, and whether it will be trusted enough to even have the opportunity.
