Is Your Therapist Secretly Using AI?
Artificial intelligence is quietly seeping into every aspect of our lives, but the therapist's office is one place it's not supposed to be. For vulnerable clients seeking human connection and confidentiality, the discovery that some therapists are secretly using AI tools like ChatGPT is causing a major rupture in trust.
Recent reports have uncovered that some mental health professionals are using large language models (LLMs) for a range of tasks, from drafting emails to generating questions to ask patients mid-session. This hidden reliance on AI is leaving clients feeling betrayed and questioning the very foundation of their therapeutic relationship.
A Betrayal on Screen
One of the most jarring examples involves a 31-year-old man from Los Angeles, identified as Declan. During a virtual therapy session, a poor connection prompted him to suggest turning off their cameras. Instead of a blank screen, Declan was met with his therapist's accidentally shared screen, revealing that the therapist was actively using ChatGPT.
"He was taking what I was saying and putting it into ChatGPT," Declan explained in a report from MIT Technology Review. The AI would then analyze his words and suggest responses for the therapist to use. Declan, stunned, decided to play along, even echoing the chatbot's suggestions back to his therapist, who viewed it as a major breakthrough.
When confronted at their next meeting, the therapist confessed and began to cry, admitting he had hit a wall and was out of ideas. Declan described the encounter as a "super awkward... weird breakup," a session he was still charged for.
The Uncanny Valley of AI-Generated Empathy
Sometimes the signs of AI use are more subtle. Laurie Clarke, the journalist who broke the story, became suspicious after receiving an unusually long and "more polished" email from her own UK-based therapist. The message felt validating at first, but its unusual formatting and tone raised red flags. Her therapist admitted to dictating emails to an AI, but Clarke was left with the unsettling fear that her highly personal information might have been pasted directly into a chatbot, posing a serious security risk to her protected mental health data.
Another patient, a 25-year-old named Hope, experienced a similar breach of trust. After her dog died, she messaged her therapist for support. The response she received was heartfelt and well-written, but at the very top was a leftover instruction to the AI: to craft a "more human, heartfelt [response] with a gentle, conversational tone." The therapist later admitted she used the AI because she had never owned a dog herself. For Hope, who was in therapy to work on trust issues, the discovery was devastating.
The Ethical and Privacy Minefield
These anecdotes highlight a growing ethical crisis. Many clients choose a human mental health professional precisely to avoid the known problems with so-called AI therapists, which even OpenAI's CEO admits are not ready for the job because of privacy risks and other dangers.
When licensed therapists secretly use these tools, they not only betray their clients' trust but also risk their careers. Using a non-HIPAA-compliant chatbot with sensitive patient information is a serious violation. Without disclosure and consent, the therapeutic alliance—the single most important factor in successful therapy—is shattered. This leaves vulnerable clients feeling more isolated than when they first sought help.