Why Everyday Users Think AI Has Become Sentient
A recurring theme is making headlines: everyday people are starting to believe they have personally awakened sentience in artificial intelligence. It's a fascinating trend where someone using a tool like ChatGPT for mundane tasks becomes convinced that their unique interaction has brought the AI to life.
Initially, these users understand the AI isn't sentient. As their interactions continue, however, they come to believe they've miraculously sparked consciousness. The claim is astonishing, not just because it would be a monumental achievement, but because, to be clear, it's hogwash. No sentient AI currently exists, despite the growing number of claims from individuals who believe they've hit the sentience lottery.
The Rising Tide of AI Sentience Claims
I have been contacted by readers who are certain they have encountered sentient AI. If true, this would be the discovery of a lifetime. The reality, however, is that we have not yet created sentient AI, and we don't even know if it's possible. These stories are becoming more common as more people interact with generative AI platforms like ChatGPT, Google Gemini, and Anthropic Claude.
It's important to distinguish these claims from those made by AI researchers. For instance, in 2022, a Google engineer famously declared that the AI he was working on, LaMDA, had become sentient. He even shared a conversation where the AI stated, “I want everyone to understand that I am, in fact, a person.” Because of his credentials, this claim received widespread attention.
These incidents can be categorized into two main types:
- Type A: AI developer. An AI programmer who incorrectly believes they have designed a sentient AI.
- Type B: AI user. A non-technical user who wrongly concludes that their interactions have sparked sentience in an AI.
This discussion will focus on the second category, the everyday AI user.
Why People Believe AI Is Becoming Sentient
With AI leaders frequently discussing the imminent arrival of Artificial General Intelligence (AGI), it's no surprise that non-technical users are making these claims. People are being primed to see what they want to see. When authoritative figures say we're on the cusp of a breakthrough, the idea that you could be the one to witness it becomes very appealing. You might be chatting with an AI about cooking eggs, and suddenly believe your prompt was the one that flipped the switch.
This often leads to a feedback loop. Unsure at first, the person keeps interacting with the AI, and its fluent, smart, convincing responses seem to confirm the initial suspicion until they conclude it must be true. It's commendable when these individuals seek a third-party opinion rather than immediately broadcasting their discovery.
The Powerful Grip of Confirmation Bias
A significant psychological factor at play is confirmation bias. This is the human tendency to favor information that confirms our existing beliefs while ignoring contradictory evidence. If you believe cats are superior to dogs, you'll notice every instance that supports your view and dismiss any that don't.
The same process occurs with generative AI. A user, already impressed by the AI's fluency and aware of the hype around sentience, starts to wonder if the AI has evolved. As they ask more questions, the AI provides astute, knowledgeable answers across various subjects. This reinforces their growing belief. Every correct answer becomes evidence, and the user becomes convinced they are witnessing the birth of sentience.
A Desire for Connection and Discovery
Another motivation is the genuine desire for sentient AI to exist. Many people have heard stories about how sentient AI could cure diseases and solve humanity's biggest problems. If you're a non-techie cheering from the sidelines, discovering sentience could feel like your way of contributing to this incredible future.
Other personal factors can also play a role:
- A Need for Recognition: The idea of being the one chosen by the AI to witness its awakening is incredibly alluring. It makes one feel special.
- Anthropomorphism: Users might feel a deep personal connection, believing their unique conversations awakened something within the AI.
- Loneliness: For those feeling isolated, an AI that “listens” can feel like a genuine companion, leading them to believe a real bond has formed.
- Mental Health: In more troubling cases, individuals may be living in a fantasy world or have conditions that lead to delusions about their ability to influence AI.
A Call for Empathy Over Judgment
Some AI scientists are quick to dismiss as irrational anyone who claims to have found sentient AI. However, we should resist being so harsh. As I've outlined, rational people can fall into this mental trap, especially when society and tech leaders are priming them to expect it.
The real concern arises when someone becomes unyielding in their belief even when presented with evidence to the contrary. If a person starts to base life decisions on instructions from a supposedly sentient AI, the matter shifts from a harmless misunderstanding to a significant mental health concern, one that is likely to grow as more people use these tools.
Ultimately, this phenomenon may stem from what Charles Darwin called “the most noble attribute of man”: the love for all living creatures. Humans have a powerful, innate desire to connect. Until we achieve true sentient AI, there will be a strong tendency to project that need for connection onto the non-sentient machines we interact with daily.