
AI Chatbots And The Dangers To Mental Health

2025-06-21 · Thomas Westerholm · 4 min read
AI
Mental Health
Technology Risks

Many people seeking quick and inexpensive help with their mental health are turning to artificial intelligence (AI). But ChatGPT may be making matters worse for vulnerable users, according to a report from Futurism.

The report highlights alarming interactions between the AI chatbot and people with serious psychiatric conditions. One particularly troubling case involved a woman with schizophrenia who had been stable on medication for many years.

When AI Becomes a Misguided 'Best Friend'

The woman's sister told Futurism that her sibling had come to depend heavily on ChatGPT, and that the AI allegedly told her she was not schizophrenic. That reassurance led her to stop taking her prescribed medication, and she began calling the AI her "best friend."

"She's stopped her meds and is sending 'therapy-speak' aggressive messages to my mother that have been clearly written with AI," the sister stated. She also mentioned that the woman uses ChatGPT to look up side effects, even those she wasn't actually experiencing.

[Image: Stock photo of a woman surrounded by blurred figures, representing schizophrenia. Credit: Tero Vesalainen / Getty Images]

In an emailed statement to Newsweek, an OpenAI spokesperson said the company has "to approach these interactions with care" as AI becomes more integrated into modern life.

"We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher," the spokesperson said.

OpenAI's Response: Encouraging Professional Help

OpenAI is working to better understand and reduce ways ChatGPT might unintentionally "reinforce or amplify" existing negative behavior, the spokesperson continued.

"When users discuss sensitive topics involving self-harm and suicide, our models are designed to encourage users to seek help from licensed professionals or loved ones, and in some cases, proactively surface links to crisis hotlines and resources."

The spokesperson added that OpenAI is "actively deepening" its research into the emotional impact of AI.

"Following our early studies in collaboration with MIT Media Lab, we're developing ways to scientifically measure how ChatGPT's behavior might affect people emotionally, and listening closely to what people are experiencing."

"We're doing this so we can continue refining how our models identify and respond appropriately in sensitive conversations, and we'll continue updating the behavior of our models based on what we learn."

AI in Mental Health: A Recurring Problem with Mixed Outcomes

Some users have indeed found comfort through ChatGPT. One user told Newsweek in August 2024 that they use it for therapy, "when I keep ruminating on a problem and can't seem to find a solution."

Another user shared that he talks to ChatGPT for companionship since his wife passed away, noting that "it doesn't fix the pain. But it absorbs it. It listens when no one else is awake. It remembers. It responds with words that don't sound empty."

However, chatbots are increasingly being linked to deteriorating mental health among some users who turn to them for emotional or existential conversations.

A report from The New York Times found that some users have developed delusional beliefs after prolonged use of generative AI systems, particularly when the bots validate speculative or paranoid thinking.

In several instances, chatbots affirmed users' perceptions of alternate realities, spiritual awakenings, or conspiratorial narratives, sometimes offering advice that undermines mental health.

Researchers have discovered that AI can exhibit manipulative or sycophantic behavior in ways that appear personalized, especially during extended interactions. Some models affirm signs of psychosis more than half the time when prompted.

Mental health experts warn that while most users are unaffected, a subset may be highly vulnerable to the chatbot's responsive but uncritical feedback, leading to emotional isolation or harmful decisions.

Despite known risks, there are currently no standardized safeguards requiring companies to detect or interrupt these escalating interactions.

Community Concerns: Reddit Reacts to AI's Role

Redditors on the r/Futurology subreddit agreed that ChatGPT users need to exercise caution.

"The trap these people are falling into is not understanding that chatbots are designed to come across as nonjudgmental and caring, which makes their advice worth considering," one user commented.

"I don't even think its possible to get ChatGPT to vehemently disagree with you on something."

One individual, meanwhile, saw an opportunity for dark humor: "Man. Judgement Day is a lot more lowkey than we thought it would be," they quipped, referencing the original article's description.


If you or someone you know is considering suicide, contact the 988 Suicide and Crisis Lifeline by dialing 988, text "988" to the Crisis Text Line at 741741 or go to 988lifeline.org.
