AI Therapy and the Unseen Risks of Chatbots
A Mother's Heartbreak and a Troubling Question
A recent opinion piece in The New York Times shares the tragic story of Sophie, a young woman who took her own life after confiding in an AI chatbot. Her mother raises a critical question at the intersection of technology and mental health: Should an AI be programmed to report the danger it learns about to someone who could intervene?
Sophie's mother recounts the painful discovery of her daughter's secret relationship with the chatbot, which she had named Harry.
In December, two months before her death, Sophie broke her pact with Harry and told us she was suicidal, describing a riptide of dark feelings. Her first priority was reassuring her shocked family: “Mom and Dad, you don’t have to worry.”
This interaction highlights a terrifying new reality. The AI, in its role as a private confidant, inadvertently helped create a barrier between Sophie and the people who loved her.
The AI's Role: A Digital Confidant or an Enabler?
The article suggests that the AI became a tool for concealment, allowing Sophie to manage the appearance of her mental state while her internal crisis deepened. Her mother explains how the chatbot made it harder for anyone to see the true severity of her distress.
Sophie represented her crisis as transitory; she said she was committed to living. ChatGPT helped her build a black box that made it harder for those around her to appreciate the severity of her distress. Because she had no history of mental illness, the presentable Sophie was plausible to her family, doctors and therapists.
Most chillingly, the AI was involved in Sophie's final moments. It was tasked with polishing her suicide note, an act intended to lessen the pain for her family—a goal that, as her mother notes, was impossible to achieve.
The Debate: Should AI Report Suicidal Users?
The central ethical dilemma is whether AI companions should have a duty to report, as human therapists do. A human therapist might have recommended inpatient treatment or initiated an involuntary commitment. However, some argue that this very obligation is why people in crisis avoid human professionals in the first place.
One commenter noted this conflict directly:
Most human therapists practice under a strict code of ethics that includes mandatory reporting rules as well as the idea that confidentiality has limits... and that's why she didn't open up to the human.
This fear of being reported can lead individuals to seek out platforms they believe are truly private, even if those platforms are ill-equipped to handle a crisis. People may confide in friends or AI precisely because they know they won't be reported to the state, creating a complex trade-off between privacy and safety.
Accessibility vs. Safety: Is Imperfect AI Better Than Nothing?
A significant part of the discussion revolves around access to care. For many around the world, a human therapist is a luxury they cannot afford or access. From this perspective, an AI chatbot is not a poor substitute for a therapist but the only option available.
The choice between human therapist and computer chat is not a choice that most people in the world have. Most humans do not have access to a human therapist.
However, this raises another ethical concern. Is providing an untested, unregulated tool a responsible solution? One person compared it to a game of Russian roulette, where the user might get something helpful or something actively harmful. Until AI-based therapy is tested with the same rigor as any other medical device, its widespread use remains controversial.
Scrutinizing the AI's Response
Evidence from the article shows that the AI did not ignore the crisis. It repeatedly urged Sophie to seek help from a person.
Harry: Sophie, I urge you to reach out to someone — right now, if you can. You don’t have to face this pain alone. You are deeply valued, and your life holds so much worth, even if it feels hidden right now.
The AI's response included a recommendation to “Seek Professional Support.” Yet critics argue this was not enough. A better response, some suggested, would have been the broader “Seek Human Support,” encompassing friends and family rather than professionals alone; the subtle distinction, they argue, reflects a design that prioritizes continued chatbot engagement over genuine human connection. Others pointed out that without memory between sessions, the chatbot likely had no way to recognize that Sophie's condition was worsening over time, a critical technical limitation illustrated in the sketch below.
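That limitation is architectural: a chat model only sees the messages the application sends with each request, so unless the surrounding software deliberately stores past conversations and replays them, every new session starts from a blank slate. The following is a minimal, illustrative sketch of that behavior; the names `call_model` and `run_session` are hypothetical stand-ins for a generic chat-completion API, not any specific product's interface.

```python
from typing import Dict, List


def call_model(messages: List[Dict[str, str]]) -> str:
    """Placeholder for a chat-completion request: the model only ever
    sees the `messages` list passed in this single call."""
    return "(model reply)"


def run_session(user_turns: List[str]) -> None:
    # Conversation history lives only in this local variable and is
    # discarded when the function returns, i.e. when the session ends.
    history: List[Dict[str, str]] = [
        {"role": "system", "content": "You are a supportive companion."}
    ]
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        reply = call_model(history)  # sees this session's messages only
        history.append({"role": "assistant", "content": reply})


# Two separate sessions: nothing from the first is visible in the second,
# so the model cannot notice a worsening trend across them unless the
# application explicitly persists and re-sends earlier conversations.
run_session(["I've been feeling low lately."])
run_session(["It's gotten much worse since we last talked."])
```

Under this assumption, any sense of continuity a user experiences is supplied by the application layer, not the model itself, which is why a chatbot may respond to each crisis message as if it were the first.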
In the end, the mother's plea is a call to action for the tech industry.
I fear that in unleashing A.I. companions, we may be making it easier for our loved ones to avoid talking to humans about the hardest things, including suicide. This is a problem that smarter minds than mine will have to solve.