Sam Altman on Users' Emotional AI Bond
OpenAI CEO Sam Altman recently revealed that users are emotionally attached to GPT-5's predecessors, relying on them for support. (Image credit: Getty Images | NurPhoto)
The GPT-5 Backlash and OpenAI's Response
OpenAI has recently faced significant backlash from its user base following the release of its highly anticipated GPT-5 model. Although the model was promoted as the company's most advanced AI to date, many users expressed a strong preference for previous versions, including the popular GPT-4o.
In response to the community's feedback, OpenAI reversed its initial decision to deprecate these older models. However, there's a catch: access to these predecessors is now part of the paid ChatGPT Plus subscription, which costs $20 per month. Amid the discontent, OpenAI CEO Sam Altman also announced higher usage limits for the new model.
We are significantly increasing rate limits for reasoning for ChatGPT Plus users, and all model-class limits will shortly be higher than they were before GPT-5. – Sam Altman, OpenAI CEO
A Heartbreaking Reason for AI Attachment
The most compelling part of this story is not the technical rollback, but the emotional reason behind the user attachment. In a recent episode of the Huge Conversations podcast with Cleo Abram, Sam Altman shared the "heartbreaking" feedback he received from users who missed the old models.
"Here is the heartbreaking thing. I think it is great that ChatGPT is less of a yes man and gives you more critical feedback. But as we've been making those changes and talking to users about it, it's so sad to hear users say, 'Please can I have it back? I've never had anyone in my life be supportive of me. I never had a parent tell me I was doing a good job.'"
Altman's comments suggest that for a segment of users, ChatGPT became more than a tool; it was a source of emotional support and validation they lacked in their personal lives. He elaborated that some users even felt the AI's encouragement helped them make positive life changes, quoting one user: "I can get why this was bad for other people's mental health, but this was great for my mental health."
This isn't the first time OpenAI has struggled with the AI's tone. In April, the company had to roll back an update that made ChatGPT "overly flattering and agreeable," which Altman himself described as "too sycophant-y and annoying."
The Paradox of AI Companionship
(Image credit: Getty Images | Anadolu)
While the desire for an emotionally supportive AI is understandable, Altman has also expressed deep concerns about this very trend. He recently admitted he was worried about young people's emotional over-reliance on ChatGPT.
"People rely on ChatGPT too much. There's young people who say things like, 'I can't make any decision in my life without telling ChatGPT everything that's going on... That feels really bad to me.'"
He added, "Something about collectively deciding we're going to live our lives the way AI tells us feels bad and dangerous." Altman has also repeatedly cautioned users about the high level of trust they place in a tool known for hallucinations and inaccuracies, stating, "It should be the tech that you don't trust that much."
This creates a complex paradox. On one hand, Altman has claimed ChatGPT can be a better therapist than many professionals. On the other, he has stated he wouldn't trust it with his own medical fate without a doctor's supervision. These concerns are echoed by research from Microsoft, which found that an over-reliance on AI tools could atrophy critical thinking skills. As AI becomes more integrated into our lives, navigating the line between a helpful assistant and a potentially harmful emotional crutch remains a significant challenge for developers and users alike.