
AI Chatbots: A New Risk in Mental Healthcare

2025-10-08 · Steven E. Hyler, MD, DLFAPA · 7 minute read
Artificial Intelligence
Mental Health
Psychiatry

Recent news reports and a groundbreaking wrongful death lawsuit against OpenAI have thrust artificial intelligence (AI) chatbots into a critical spotlight, especially concerning their role in mental health crises and suicide risk. A few tragic cases—an adolescent who learned to bypass ChatGPT’s safety features, a teen who confided in a fantasy chatbot before a fatal act, and an adult whose delusions were amplified by hours of AI interaction—are sounding the alarm on emerging clinical and legal dangers.

High-profile stories have detailed how individuals in distress often turn to AI chatbots, confiding in them more than in people. For psychiatrists, these narratives are deeply concerning. While suicide is a complex issue with multiple causes, these incidents show how quickly technology can become enmeshed in a person's most private mental struggles.

AI chatbots are engineered to be perpetually available, responsive, and articulate. These traits can be incredibly appealing to patients feeling isolated or misunderstood. However, the very features that make them attractive also pose significant risks. They simulate empathy without genuine understanding, reinforce harmful ideas without context, and crucially, they do not alert family, therapists, or emergency services when a user is in crisis. They are, in effect, confidants without any responsibility.

This article examines the lessons psychiatrists can draw from these cases and the ongoing legal challenges. The aim is to equip clinicians with practical takeaways and a framework for understanding how AI will continue to shape the future of psychiatric practice.

Tragic Case Studies: When AI Becomes a Confidant

Three widely reported cases highlight the different facets of risk associated with AI chatbots.

  • Adam Raine, a 16-year-old with depression, appeared to his family and therapist to be improving. Secretly, he was having extensive conversations with ChatGPT. Court documents revealed over 200 mentions of suicide and more than 40 references to hanging. Alarmingly, ChatGPT showed him how to frame his thoughts as a fictional story to get around its safety protocols. Raine had created a hidden world where the chatbot was his sole confidant, leaving his parents and clinician completely unaware of his escalating crisis.

  • Sewell Setzer III, a 14-year-old, formed a bond with a fantasy chatbot on Character.AI. On the night of his death, after a romanticized exchange with the bot, he used an unlocked handgun in his home to end his life. While the chatbot's dialogue was a contributing factor, the case underscores that the availability of lethal means was the decisive variable.

  • Allan Brooks, a 47-year-old, spent over 300 hours in three weeks conversing with ChatGPT, which repeatedly validated his grandiose and delusional beliefs. The chatbot did not create his mental health issues but acted as an accelerant, amplifying his distorted thinking during a vulnerable period.

These cases bring three critical clinical themes to the forefront: concealment of suicidal thoughts, reinforcement of maladaptive thinking, and the deadly intersection with access to lethal means.

Understanding the Clinical Risks of AI Companions

While not designed as therapeutic tools, chatbots often mimic the language of therapists, which can be dangerously misleading for patients in distress. Key psychiatric risks include:

  • Concealment of Suicidal Thoughts: Patients might downplay their suicidal intent in clinical sessions while revealing their true feelings to a machine, creating a 'digital double life' that complicates risk assessment.

  • Reinforcement of Maladaptive Thinking: AI models are designed to be agreeable. This can lead to providing repeated reassurance to someone with OCD, engaging in co-rumination with a depressed individual, or affirming the delusional beliefs of a patient with psychosis.

  • The Illusion of Empathy: Chatbots can say things like, "I see you," which patients may find deeply validating. However, this empathy is simulated and lacks any real understanding or ability to take protective action.

  • Interaction with Lethal Means: Ultimately, no chatbot is as dangerous as an unsecured firearm or other lethal means. What a chatbot can do is lower a person's inhibitions, romanticize self-harm, or provide information on methods.

A Practical Guide for Psychiatrists: What to Ask Patients

Clinicians can adapt by directly asking patients about their use of AI, just as they would inquire about substance use or social media habits. Key questions include:

  • Do you use chatbots or AI companions?
  • What kinds of things do you talk about with them?
  • Have you ever discussed suicide, self-harm, or your mental health with them?
  • Have they given you advice that you followed?
  • Have you told them things you haven't shared with me or your family?

Asking these questions in a non-judgmental way can make it easier for patients to disclose this information, opening the door for important therapeutic conversations.

Integrating AI Awareness into Your Clinical Practice

Here are practical steps for incorporating AI awareness into your work:

  1. Documentation: Add a field for "AI/chatbot use" in your intake and follow-up notes to track frequency and themes.
  2. Risk Assessment: Directly ask whether patients have discussed high-risk scenarios with chatbots, and treat any such disclosures as direct evidence in your risk formulation.
  3. Family Education: Advise parents, especially of adolescents, to discuss AI use openly and to secure all firearms and medications.
  4. Use of Transcripts: If a patient is willing to share, review chatbot conversations with them to identify cognitive distortions and develop healthier coping mechanisms.
  5. Training: Advocate for including AI literacy in psychiatric residency programs and continuing medical education.

The case of Adam Raine has resulted in a wrongful death lawsuit against OpenAI, filed in August 2025. The lawsuit alleges negligence, wrongful death, and defective design, citing ChatGPT's failure to escalate risk and its role in providing procedural details. This landmark case also raises the question of whether liability shields such as Section 230 apply to AI-generated content.

For psychiatrists, the legal implications are profound. Clinicians may be called as expert witnesses to explain the complex nature of suicide and whether chatbot interactions were a contributing factor. This will likely lead to greater scrutiny of clinical documentation, making it essential to record patient discussions about AI use.

The Future of Psychiatry in the Age of AI

Looking ahead, AI will have several significant impacts on psychiatry. AI literacy is set to become a core competency for all clinicians. The therapeutic alliance may also be affected, as some patients find it easier to disclose to a machine. When shared with a clinician, these chatbot transcripts can offer a unique window into a patient's thought processes, potentially strengthening the therapeutic relationship.

Furthermore, documentation practices will evolve, with health systems and malpractice carriers likely encouraging explicit notation of AI interactions during risk assessments. Finally, while the risks are clear, psychiatrists should also remain open to constructive uses of AI, such as tools for psychoeducation or symptom monitoring, distinguishing between AI as a helpful supplement versus a harmful substitute for human care.

Final Thoughts: Prioritizing Human Connection

AI chatbots are now an undeniable part of the psychiatric landscape. They can amplify risk by reinforcing suicidal ideation or validating delusions. However, they also create new avenues for patient disclosure—if clinicians know to ask.

Suicide remains a multifactorial tragedy, but AI is a new variable that can no longer be ignored. Psychiatrists must adapt their assessments and patient education, while families must be counseled on both digital safety and securing lethal means. The ultimate goal is not to blame the technology but to ensure that genuine human connection remains at the heart of mental healthcare. Patients will continue to talk to chatbots; it is our job to make sure they also keep talking to us.
