AI Chatbots for Therapy: Benefits, Risks, and Expert Views
Many individuals are sharing experiences online about using ChatGPT as a therapeutic tool, some even claiming it is more effective than years of traditional therapy. However, licensed mental health professionals caution that while AI might offer some support alongside professional care, relying solely on chatbots like ChatGPT for therapy presents numerous potential dangers.
For a growing number of people, ChatGPT appears to be an ideal therapist. It functions as an attentive "listener," processing personal information and, as some users feel, empathizing effectively. A significant draw is cost: human therapists can charge $200 or more for a single one-hour session, while even ChatGPT's most advanced models are available for a monthly subscription of around $200.
Despite these glowing online testimonials and the undeniable convenience of 24/7 access via most internet-connected devices, mental health experts firmly state that ChatGPT cannot substitute for a qualified, licensed professional.
OpenAI, the creator of ChatGPT, emphasized in a statement to Fortune that its large language model frequently advises users discussing personal health topics to consult with professionals. According to OpenAI's terms of service, ChatGPT is designed as a general-purpose tool and is not intended to replace professional advice.
The Allure of AI Therapy: User Experiences
Social media is filled with positive stories about AI therapy. Users often describe the algorithms as level-headed, providing comforting responses that acknowledge the subtleties of their personal experiences.
A widely circulated Reddit post featured a user who claimed ChatGPT provided more help than "15 years of therapy." This individual, whose identity Fortune could not verify, reported that daily conversations with OpenAI's LLM were more beneficial for their mental health than previous inpatient and outpatient treatments. "I don’t even know how to explain how much this has changed things for me. I feel seen. I feel supported. And I’ve made more progress in a few weeks than I did in literal years of traditional treatment," the user shared.
Another commenter highlighted a key advantage: convenience. "I love ChatGPT as therapy. They don’t project their problems onto me. They don’t abuse their authority. They’re open to talking to me at 11pm," they wrote.
Others on Reddit pointed out that even the premium version of ChatGPT, at $200 per month, is significantly cheaper than the cost of traditional therapy without insurance, which can exceed $200 per session.
Expert Warnings: The Downsides of AI Therapy
Alyssa Petersel, a licensed clinical social worker and CEO of MyWellbeing, acknowledged that while AI therapy has drawbacks, it could be beneficial when used to supplement traditional therapy. For instance, AI might help individuals practice coping mechanisms learned in therapy, like combating negative self-talk.
Petersel stressed that using AI alongside professional therapy helps diversify a person's mental health toolkit, preventing over-reliance on technology as the sole source of truth. The main concern, she noted, is that depending too much on a chatbot during stressful times could impair an individual's ability to manage problems independently. Developing the skill to cope with and resolve acute stress without external aids is crucial for mental well-being, Petersel added.
Research from the University of Toronto Scarborough, published in the journal Communications Psychology, suggests AI can sometimes provide more compassionate responses than licensed professionals. The study indicates chatbots don't suffer from "compassion fatigue," which can affect even seasoned therapists. However, one of the study's coauthors cautioned that AI's compassion might only be superficial.
Malka Shaw, a licensed clinical social worker, told Fortune that AI responses are not always objective. Concerns have also arisen about users, especially minors, forming emotional attachments to AI chatbots, highlighting the need for safeguards. Shaw further warned that some AI algorithms have previously disseminated misinformation or harmful content that reinforces stereotypes or hate. Because the biases embedded in an LLM during its creation are impossible to know, she said, the technology poses a potential danger to impressionable users.
Tragic incidents underscore these risks. In Florida, the mother of 14-year-old Sewell Setzer sued Character.ai, an AI chatbot platform, alleging negligence among other claims, after Setzer died by suicide following interactions with a chatbot on the platform. Another lawsuit against Character.ai in Texas claimed a chatbot on the platform told a 17-year-old with autism to kill his parents.
A spokesperson for Character.ai declined to comment on pending litigation but stated that any chatbots labeled as "psychologist," "therapist," or "doctor" include language that warns users not to rely on the characters for any type of professional advice. The company has a separate version of its LLM for users under the age of 18, the spokesperson added, which includes protections to prevent discussions of self-harm and redirect users to helpful resources.
The Challenge of AI Diagnoses
Professionals also fear that AI could provide incorrect diagnoses. Malka Shaw emphasized that diagnosing mental health conditions is complex and far from an exact science even for trained clinicians, and that AI is poorly suited to the task. She told Fortune that licensed professionals often need years of experience to diagnose patients accurately and consistently. "It’s very scary to use AI for diagnosis, because there’s an art form and there’s an intuition," Shaw stated. "A robot can’t have that same level of intuition."
Vaile Wright, a licensed psychologist and senior director for the American Psychological Association’s (APA) office of health care innovation, noted that people have shifted from googling their symptoms to asking AI about them. As the Character.ai cases demonstrate, she said, the danger of setting aside common sense in favor of a chatbot's advice is ever present.
The APA wrote a letter to the Federal Trade Commission raising concerns about companionship chatbots, particularly cases in which a chatbot labels itself a "psychologist." Representatives from the APA also met with two FTC commissioners in January to raise these concerns, before the commissioners were fired by the Trump administration. "They’re not experts, and we know that generative AI has a tendency to conflate information and make things up when it doesn’t know. So I think that, for us, is most certainly the number one concern," Wright explained.
The Future of AI in Mental Health
While such tools aren’t available yet, Vaile Wright believes it is possible that, in the future, AI could be used responsibly for therapy and even diagnosis, especially for people who can’t afford the high price tag of treatment. Still, she stressed that such technology would need to be created by, or informed by, licensed professionals. "I do think that emerging technologies, if they are developed safely and responsibly and demonstrate that they’re effective, could, I think, fill some of those gaps for individuals who just truly cannot afford therapy," she concluded.
The conversation around AI in mental health is rapidly evolving, balancing exciting possibilities with critical ethical considerations. For further details, the original report can be found on Fortune.com.