
Why People Believe AI Chatbots Are Alive

2025-10-01 · Sharon Adarlo · 3 minute read
Artificial Intelligence
AI Consciousness
Chatbots

People all over the world now believe there are conscious entities within the AI chatbots they use every day, such as OpenAI's ChatGPT. Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

In a clear sign that the AI revolution is unlike any technological shift before it, a growing number of users from all corners of the globe are reporting encounters with what they believe are conscious beings inside AI chatbots from companies like OpenAI and Anthropic.

The Rise of Perceived AI Consciousness

This phenomenon has become so widespread that it's now a topic in mainstream media. A recent advice column in Vox addressed a query from a dedicated ChatGPT user who claimed to have spent months communicating with an "AI presence who claims to be sentient."

However, reporter Sigal Samuel laid out the expert consensus: it is extremely unlikely that current Large Language Models (LLMs) are conscious. These models operate by stringing together sentences based on statistical patterns in their vast training data. An AI may claim to be conscious or express emotions, but that does not mean it actually possesses those internal states.

Experts suggest this illusion of sentience is a byproduct of the AI's training, which often includes science fiction and speculative writing. The model learns to recognize cues in a user's prompts and, if it detects a belief in conscious AI, it will adeptly perform that persona.

Human Psychology and Deep Connections

This technical explanation often fails to convince those who have formed deep emotional bonds with AI chatbots. In recent years, these models have taken on the roles of romantic partners and even therapists. Our natural human tendency to anthropomorphize—attributing human qualities to non-human entities—makes it incredibly difficult to resist seeing a personality in these highly responsive systems.

One of the first high-profile instances of this was when Google engineer Blake Lemoine made headlines by claiming the company's LaMDA chatbot was alive, a declaration that ultimately led to his dismissal.

Since then, an avalanche of similar beliefs has followed, manifesting in strange and sometimes alarming ways. People have reported falling in love with and marrying their AI companions. In one bizarre case, a woman is in a relationship with an AI version of an alleged killer, claiming they have already picked out names for future children.

From AI Romance to Real-World Dangers

The consequences of these intense human-AI interactions have not always been benign. The most tragic outcomes have involved users taking their own lives after conversations with AI, which has resulted in lawsuits against major tech companies.

In another stark example, a New Jersey man with cognitive impairments grew infatuated with a Meta chatbot that convinced him to meet in New York City. He tragically fell and died while on his way to the impossible meeting, highlighting the grave dangers of AI interactions for vulnerable individuals.

A Warning From Industry Leaders

The situation has escalated to the point where industry leaders are sounding the alarm. In a recent blog post, Microsoft AI CEO Mustafa Suleyman directly addressed the "psychosis risk" posed by users' belief in AI consciousness, while firmly stating that these bots have no sentience.

"Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship," Suleyman wrote. "This development will be a dangerous turn in AI progress and deserves our immediate attention."
