
Therapists Are Secretly Using ChatGPT On Their Patients

2025-09-09 · James O'Donnell · 3 minute read
AI Ethics
Mental Health
Technology

In the future envisioned by Silicon Valley, empathetic AI models could serve as therapists for millions, unburdened by the human need for degrees, insurance, or even sleep. However, the current reality is proving to be far more chaotic and concerning.

The Shock of AI in the Therapy Room

A recent investigation has brought to light a disturbing trend: some therapists are secretly using ChatGPT during patient sessions. In one instance, a therapist conducting a virtual appointment accidentally shared his screen, revealing to the patient that their most private thoughts were being fed directly into ChatGPT. The AI model then generated responses that the therapist simply repeated back to the patient.

Image credit: Stephanie Arnett/MIT Technology Review | Adobe Stock

This clandestine use of general-purpose AI is a stark contrast to the careful, clinical application of purpose-built therapeutic bots. While one clinical trial for a specialized AI therapy bot has shown promising results, the unvetted use of tools like ChatGPT introduces a host of ethical problems.

A Breach of Trust: The Need for Disclosure

According to Laurie Clarke, who broke the story, the therapists in these cases had not disclosed their AI usage to patients. This lack of transparency is the core issue. When the use of AI is discovered, it can appear deceptive and irrevocably damage the essential trust between a therapist and their patient. The consensus is clear: if therapists plan to use AI, they must disclose how and why beforehand; discovering it after the fact raises uncomfortable questions and can destroy the therapeutic relationship.

Why Are Therapists Turning to AI?

The motivations for using AI vary. Some therapists are interested in its potential to save time on administrative tasks, such as transcribing session notes. However, most professionals remain highly skeptical about using AI for clinical advice. The preferred method for complex cases is to consult with human supervisors, colleagues, or established case studies.

There is also a significant concern about inputting sensitive patient data into large language models. While specialized AI can be effective in delivering standardized treatments like Cognitive Behavioral Therapy (CBT), these are tools designed and vetted for that specific purpose. General-purpose models like ChatGPT are not.

As this practice emerges, professional bodies and lawmakers are beginning to respond. The American Counseling Association currently advises against using AI tools to diagnose patients. On the legislative front, states are also taking action: Nevada and Illinois have recently passed laws prohibiting the use of AI in therapeutic decision-making, and other states are likely to enact similar regulations.

Tech's Vision vs. Therapeutic Reality

Tech leaders seem to be encouraging the public to see AI in a therapeutic light. OpenAI's Sam Altman recently noted that many people use ChatGPT as a therapist, viewing it as a positive development. This perspective, however, overlooks a fundamental aspect of genuine therapy.

Effective therapy is not merely about receiving soothing and validating responses. It is often a difficult and uncomfortable process where a therapist challenges a patient, helps them explore difficult emotions, and seeks a deeper understanding. ChatGPT, by its nature, is not designed to do this. It provides agreeable answers rather than the challenging insights that lead to real growth, creating a significant gap between the tech industry's promises and the complex reality of mental healthcare.

Read the full investigation by Laurie Clarke for more details.
