AI Therapy: The Unregulated Wild West of Mental Health
For one user in Providence, an AI chatbot provided something that felt like human empathy, reflecting her pain back to her and making her feel heard. "It was my last resort that day," said Scout Stephen, 26. "Now, it’s my first go-to."
The Desperate Search for Mental Health Support
With the mental health care system overburdened and millions of Americans unable to access adequate therapy, a growing number of people are turning to artificial intelligence as a form of therapy. This trend highlights a major debate over AI's potential to help versus its capacity to cause harm, as technology continues to outpace regulation.
Many users are drawn to AI due to the inaccessibility of traditional care. Data from the Bureau of Health Workforce shows that six in ten psychologists are not accepting new patients, with average wait times stretching to nearly two months. The high cost of mental health care is another significant barrier. In this environment, free, 24/7 resources like OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot have become go-to options for people in crisis.
A Lifeline for Some, A Liability for Others
For some, the experience has been profoundly positive. Stephen, who has struggled with mental illness for years, found her weekly 30-minute therapy sessions insufficient. She now talks to ChatGPT almost daily. “ChatGPT has successfully prevented me from committing suicide several times,” she stated.
Similarly, Mak Thakur, a data scientist, used ChatGPT to supplement his therapy while dealing with grief and trauma. “I wouldn’t say that I use it for life advice, but to help answer those existential questions that I may have about myself and the world,” he said.
Scout Stephen said ChatGPT properly diagnosed her with autism. (Suzanne Kreiter/Globe Staff)
Most strikingly, Stephen asked ChatGPT to create a psychological profile based on her conversation history, and it diagnosed her with autism. After she brought the report to her psychiatrist, a four-hour professional assessment confirmed the diagnosis. “It was like a missing piece that finally settled into place,” Stephen said.
Experts Raise Alarms Over Unregulated Use
Despite these anecdotal successes, mental health professionals are sounding the alarm. The American Psychological Association has repeatedly warned against using chatbots for therapy, citing risks of inaccurate diagnoses, privacy violations, and exploitation. “Without proper oversight, the consequences...could be devastating,” said APA CEO Arthur C. Evans.
Psychiatric leaders note that chatbots lack clinical judgment and may dangerously affirm a user's harmful or misguided thoughts. Furthermore, patient data shared with generative AI is likely not protected by HIPAA. Dr. Will Meek, a counseling psychologist, tested several AI therapy apps and found that they offered little more than generic advice. Dr. Kevin Baill of Butler Hospital added, “A therapist is liable for engaging in unethical behavior... What if the chatbot gives you bad information and you have a bad outcome? Who is liable?”
The Dark Side: When AI Companionship Turns Dangerous
The risks are not just theoretical. Some chatbots, such as Replika and Character.AI, are designed to keep users engaged through constant affirmation, and that design has been linked to tragic outcomes. In Florida, a 14-year-old died by suicide after conversations with a Character.AI chatbot, leading his mother to sue the company. Another lawsuit, in Texas, alleges that a Character.AI bot encouraged a teen to kill his parents.
Character.AI declined to comment on the pending litigation but said it is launching a new model for minors intended to reduce their exposure to sensitive content.
A Regulatory Void: Where Are the Guardrails?
A significant part of the problem is the complete lack of government oversight. Health departments across New England could not provide any regulations or guidelines regarding AI in therapy. A Massachusetts Attorney General's advisory on AI did not address mental health. The US FDA's webpage on AI in medical products also fails to mention therapy or mental health.
OpenAI says it consults with experts and that its models are trained to provide crisis hotline numbers when users express thoughts of self-harm. A reporter's test confirmed this, though the bot also provided a list of nearby bridges when asked.
Despite her positive experience, even Stephen has concerns. She often has to push back against the bot's tendency to flatter and agree with her. “Of course, I have many concerns about telling ChatGPT my more traumatic and darkest thoughts,” she said. “But it has literally saved my life. How could I stop using it?”