
AI Bots Overwhelm Reddit as Users Question Reality

2025-06-24 · Nitish Pahwa · 9 min read
Artificial Intelligence
Reddit
Online Communities

The Unsettling Rise of AI on Reddit

Since ChatGPT's debut sparked an artificial intelligence frenzy in Silicon Valley, the internet's most active communities have grappled with a flood of AI-generated content, especially as automated outputs grow more sophisticated. Reddit, the anonymous message-board network that has connected millions of people globally for two decades, embodies this challenge particularly well. Amid the deluge of AI material, many users there increasingly wonder whether they are still connecting with other humans at all.

Photo illustration: a hand holding a phone displaying the Reddit app, with a ChatGPT logo behind it. Photo illustration by Slate; photo by Davide Bonaldo/SOPA Images/LightRocket via Getty Images.

An Unauthorized Experiment Shakes a Community

These concerns, while not new, were dramatically amplified by a startling instance of AI-powered manipulation. In late April, moderators of the popular subreddit r/ChangeMyView revealed that University of Zurich researchers had conducted an "unauthorized experiment" on their community members. This experiment "deployed AI-generated comments to study how AI could be used to change views." The moderators reported that the Zurich academics informed them in March about using multiple accounts over several months to post AI-generated comments on r/ChangeMyView. These AI personas included roles such as "a victim of rape" and "a black man opposed to Black Lives Matter." The research team from Zurich stated, "We did not disclose that an AI was used to write comments, as this would have rendered the study unfeasible. We believe, however, that given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules."

The r/ChangeMyView moderators disagreed strongly, filing an ethics complaint with the university, demanding the study not be published, and contacting Reddit's legal team. The researchers, invited to respond to subreddit questions under the username LLMResearchTeam, maintained that "we believe the potential benefits of this research substantially outweigh its risks." This did not sit well with the already incensed Redditors. A typical comment expressed frustration: "It potentially destabilizes an extremely well moderated and effective community. That is real harm." Reddit's executives also took a firm stance. Chief Legal Officer Ben Lee announced that Reddit was contacting the University of Zurich and the research team with formal legal demands and had banned all accounts used in the experiment. Subsequently, the university informed 404 Media it would not publish the study, and the researchers issued a formal apology to r/ChangeMyView in early May, expressing regret for the discomfort caused and offering to collaborate on protective measures.

The Human Cost: User Trust Erodes

Moderators noted they declined the collaboration offer, stating, "This event has adversely impacted CMV in ways that we are still trying to unpack." One user, an active r/ChangeMyView commenter for nearly five years, wrote that the experiment's fallout "kinda killed my interest in posting. I think I learned a lot here, but, there’s too many AI bots for Reddit to be much fun anymore. I’ve gotten almost paranoid about it." Another user concurred, observing that "so many popular subs are full of AI posts and bot comments." Posts sharing news of the Zurich study in other subreddits frequently invoked the "dead internet theory," a long-standing idea that bots, not humans, populate most of cyberspace. Moderators of the r/PublicFreakout subreddit told me via Reddit message that the news "confirm[s] our suspicions that these bot farms are active and successful across Reddit." Another user, speaking anonymously, confessed to now viewing every interaction with "increased suspicion."

Moderators on the Front Lines: Challenges and Perspectives

Brandon Chin-Shue, an r/ChangeMyView moderator and a 15-year Redditor, explained that studies had been conducted within the subreddit before, but always with prior moderator approval and user notification. "Every couple of months we usually get a teacher who wants their students to come on ChangeMyView so that they can learn how discussion and debate works, or there’s a research assistant who asks about scraping some information," Chin-Shue said. "Over the past few years, ChangeMyView has been more or less very open to these sorts of things." He mentioned a recent study by OpenAI on the subreddit to test its o3-mini model's persuasiveness, similar to a 2024 experiment with its o1 model. OpenAI has also tested generative text models on boards like r/AskReddit, r/SubSimulatorGPT2, and r/AITA ("Am I the Asshole?").

"We have pretty strict rules that not only the users but the moderators have to follow," Chin-Shue added. "We try to be good about communicating to our users. Every time we’ve received and granted a request for research, we let the people know." He also expressed general satisfaction with Reddit's response during this incident.

Reddit Inc: Navigating AI, Business, and Community Backlash

This cooperative stance hasn't always characterized Reddit's relationship with its users, especially concerning its AI-era adjustments. The late 2022 explosion of ChatGPT coincided with Reddit's plans to transition from a free forum to a self-sustaining business, complete with new revenue streams and a stock market IPO. CEO Steve Huffman’s decision to charge for access to Reddit’s previously free data API, aimed at limiting AI firms' data ingestion for model training, sparked widespread Redditor revolt. However, Huffman prevailed over the dissenters, implemented the API pricing, took Reddit public, and led the company to its first profitable quarter in late 2024.

Crucially, Huffman secured exclusive AI deals throughout that year. Google now pays Reddit a reported $60 million annually to train its AI models on Reddit text, also gaining exclusive rights to display Reddit pages in search. Reddit is now the second-most-cited website in Google’s AI Overviews, after the AI-saturated Quora. OpenAI, led by former Reddit executive Sam Altman, formed a partnership allowing ChatGPT to cite Reddit content, use Reddit ad slots for OpenAI promotion, and let Reddit use OpenAI software for in-app features. Semafor also reported Reddit is in talks with Worldcoin, another Altman-founded company, for identity verification via its controversial eyeball-scanning technology. Reddit's in-house AI tools under development include a "Reddit Answers" generative search feature and a "Reddit Insights" tool for advertisers. Additionally, moderation bots have faced backlash for allegedly overzealous comment policing.

Still, Reddit admits that gatekeeping its data has been a challenge. In a Verge interview last year, Huffman criticized AI companies like Anthropic, Perplexity, and Microsoft for using Reddit data without compensation. And although Anthropic claimed it had placed Reddit on its web-crawling block list, Reddit sued the AI startup this month for allegedly continuing to access Reddit’s servers without permission. Lee stated to Slate, “We will not tolerate profit-seeking entities like Anthropic commercially exploiting Reddit content for billions of dollars without any return for redditors or respect for their privacy.”

The Blurring Lines: Can Users Spot the Bots?

Keeping Reddit interactions personal, healthy, and bot-free is perhaps an even tougher task than policing predatory AI crawlers. Reddit relies on volunteer moderators, whose capacity to review posts and comments is limited, especially in larger communities. The r/PublicFreakout moderators, who oversee about 4.7 million members, told me they handle at least 250,000 comments a month.

"We have a pretty large and active team but we cannot possibly read that many comments a month, and we definitely couldn’t review the profiles of every single commenter," one mod stated. An ex-Redditor and former r/BrianThompsonMurder moderator, Zakku_Rakusihi, who works with machine learning, noted that "a sizable portion of Reddit does not agree with A.I." and avoids engaging with it, making it harder for users to spot automated responses. "A lot of users still treat the majority of interactions as human, even if some A.I. text is pretty obvious... It doesn’t pop up in their brain automatically." He added that AI-generated imagery is particularly problematic: "In the art- and DIY-related subreddits I’ve helped with, we had to implement ‘no A.I.-generated art’ rules."

Complicating matters, Redditors have long suspected that more bots exist than acknowledged, a suspicion dating back to the founders' admission that they used fake accounts in the platform's early days. Indeed, numerous bots and fake users were on Reddit well before ChatGPT. Not all were malicious; many were tools moderators used for rule enforcement.

The Spectre of the Dead Internet Theory

In the mid-2010s, Imperva reported that a slight majority of all web traffic in 2015 was bot-generated. Since then, the "dead internet theory" has loomed large, especially for Reddit. Post-ChatGPT, numerous forum posts, essays, and Reddit discussions claim the platform is now mostly bots interacting, a fate some say has befallen sites like Quora and DeviantArt. Even before the Zurich experiment, one r/ChangeMyView poster argued that bot activity and its effects were more significant than openly discussed.

Moderator Chin-Shue doesn’t believe the platform has reached that point. "I haven’t seen anything that makes me convinced that that time is now," he said, citing other user frustrations as currently more pressing. However, he foresees a future challenge: "I think there’s going to have to be some sort of reckoning on Reddit, because as the bots get better, it’s going to be harder to keep yourself from being used by these bots. When ChatGPT started being a thing, everybody was accusing everybody of being a bot... The worst thing that does is just muddy the waters and make everybody distrust each other."

Reddit's Vow: A Future Focused on Human Connection?

Reddit executives publicly insist their platform should remain human-centric. In a May 5 post, Huffman acknowledged, "unwelcome AI in communities is a serious concern. It is the worry I hear most often these days from users and mods alike." He emphasized, "Our focus is, and always will be, on keeping Reddit a trusted place for human conversation."

Lee, Reddit's legal chief, wrote to Slate, “Now more than ever, people are seeking authentic human-to-human conversation.” He added, “These conversations don’t happen anywhere else—and they’re central to training language models like [Anthropic chatbot] Claude. This value was built through the effort of real communities, not by web crawlers.”

This is undoubtedly true. But as moderators ban users with unhealthy attachments to chatbots, as police departments deploy bots of their own, as independent AI enthusiasts create fake accounts on Reddit, and as those same moderators struggle to keep their communities genuine, the question lingers: how long will those “real communities” stay truly real?
