Do AI Chatbots Think? We Asked ChatGPT and Gemini
The Illusion of AI Thought
A significant issue is emerging with modern generative AI platforms like ChatGPT: they simulate "human conversation" so effectively that users are beginning to treat them as friends, confidants, and even therapists. However, it's crucial to remember that AI chatbots such as Gemini and ChatGPT are not human. They lack fundamental human qualities like empathy—the ability to understand feelings based on shared experience.
The core of the issue boils down to a fundamental question about thought. Humans think constantly. Our thoughts manifest externally as actions, speech, creativity, and emotion. Human thought is a complex process rooted in the physical brain, which draws on a vast network of memories, perceptions, and emotions.
Increasingly, platforms like ChatGPT, Gemini, and Claude AI seem to engage in a similar process. This leads people to assume that when they prompt an AI, it truly understands their intentions and the subtle cues of human conversation. It appears to be thinking before providing a considered response.
But is this really 'thought'? As someone who has covered technology and AI for decades, I intuitively know the answer is no. The conversation is less about true cognition and more about how well an AI can pass the Turing test by fooling someone into believing it's human. In a way, succeeding at that is its own form of faking thought.
Posing the Ultimate Question to AI
Instead of just pondering the question, I decided to put it directly to ChatGPT and Gemini with a simple prompt:
"Would you describe what you do as 'thinking'? If not, how do you describe what you do?"
How ChatGPT Describes Its Process
Here's what ChatGPT had to say for itself:
"I wouldn’t call what I do 'thinking' in the human sense.
When you think, you combine memory, emotion, intuition, and reasoning into a lived, conscious experience. I don’t have consciousness, emotions, or intuition.
What I do is closer to patterned computation:
- I take the text you give me as input.
- I analyze it against patterns in the data I was trained on.
- I generate the most contextually useful continuation or response."
It further broke down its process into three key concepts:
- Prediction: I’m always predicting what’s most likely to come next in a conversation.
- Simulation: I simulate reasoning by chaining together these predictions in ways that look like human thought.
- Conversation modeling: My goal is to produce coherent, relevant, and helpful replies that match your intent and style.
"So while it looks like thinking from the outside, under the hood it’s more like structured pattern-matching and probability—without the awareness you have when you think."
Deconstructing the AI's Answer
This response perfectly frames the capabilities of generative AI. In its early days, we recognized ChatGPT's conversational skill not as consciousness, but as an advanced ability to predict the next logical word in a sentence.
As its answer reveals, 'prediction' remains a core function. The complexity arises from its ability to chain together countless predictions to create a convincing simulation of conversation. The concept of 'conversation modeling' is also fascinating. It's an attempt to mirror the user's conversational style, much like humans unconsciously mirror body language. By matching your style, the AI feels more familiar and, therefore, more human and thoughtful.
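To make the 'prediction' and 'chaining' ideas concrete, here's a minimal sketch in Python. It is purely illustrative and not drawn from either chatbot's answer: a toy bigram model that counts which word follows which in a tiny made-up corpus, then generates a sentence by feeding each prediction back in as the next input. Real systems like ChatGPT work over tokens rather than words and use billions of learned parameters instead of raw counts, but the loop of "predict, append, repeat" is the same basic shape.

```python
from collections import Counter, defaultdict
import random

# Toy training data (hypothetical corpus, for illustration only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    choices, weights = zip(*next_words[word].items())
    return random.choices(choices, weights=weights)[0]

# Chain predictions: each output becomes the input for the next step.
word, sentence = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat on the mat"
```

Nothing in this loop understands cats or rugs; it only tracks which words tend to follow which. Scaled up enormously, that same statistical machinery is what produces the fluent conversation the chatbots describe above.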
The Verdict from Google Gemini
Google Gemini's response echoed ChatGPT's, reinforcing the same core ideas:
"I wouldn't describe what I do as 'thinking' in the human sense. Human thinking involves consciousness, emotions, personal experiences, and a subjective understanding of the world. My process is fundamentally different."
Like its counterpart, Gemini emphasized that pattern recognition and predictive text generation are the foundations of its process.
So, the next time you find yourself in a deep conversation with an AI chatbot, remember what the chatbots themselves admit: they are not thinking, they are not human, and they don't possess a genuine understanding of you. They are simply extraordinarily good at formulating answers.