Your Next Therapist Could Be an Artificial Intelligence
A New Kind of Therapy Support
Last fall, Natasha began using ChatGPT to help manage her adult son's mental health needs following a crisis that led to a month-long residential program. Her son, who has learning disabilities and has battled depression since age twelve, was struggling to engage with the "huge book" of therapy materials he was sent home with. "We decided to feed that into ChatGPT," recalls Natasha, a New York-based nutrition expert.
The AI quickly made the material more accessible and interactive for him. This soon evolved into her son using ChatGPT as a de facto therapist. He would ask, "I’m having these feelings, what should I do?" and ChatGPT would respond with exercises and ways to reframe negative thoughts. Natasha notes that its advice was essentially the same as what human therapists had offered for years, but it was better organized, more detailed, available on-demand, and free. They enhanced its effectiveness by feeding it detailed information from his past clinical evaluations, which she says "was even more useful."
The Rise of the AI Companion
The case of Natasha's son, supported by clinical data and in-person care, seems close to ideal. Many others, however, are embracing AI for emotional support with far fewer guardrails, and recent usage data point to a marked shift toward the personal and emotional: "therapy/companionship" is now the single most popular application of the technology.
In May alone, ChatGPT logged 5.9 billion visits, suggesting that an enormous number of people are turning to AI for their most intimate needs. An analysis published in Harvard Business Review found that therapy and companionship have surpassed "search" and "generating ideas" as the technology's top uses. Users consistently cite three benefits: it is affordable, always available, and free of the judgment or drama that can come with human interaction.
Monique, a college professor in her 50s, has been using ChatGPT-4o "obsessively" to work through a conflict with colleagues. She finds it helps her regain perspective in moments of catastrophizing and spares her friends the burden of repetitive worries. She also feels free to ask it "more reckless" and revealing questions than she would pose to her human therapist.
Dane, a 37-year-old software engineer, uses AI to discuss relationship problems, sometimes pasting entire text message conversations into the chat. He finds it provides "immediate emotional relief" and helps him avoid conflict with his partner, who also uses it. He notes, "we reference it as if it’s an authority on our relationship."
Reagan, a 24-year-old copywriter, is more cautious, avoiding core identity topics. However, she uses it to manage her hypochondria, decode confusing work interactions, and avoid overwhelming her boyfriend with her anxieties.
A Personal Test and Its Hidden Dangers
While users report real benefits, the potential for harm is becoming evident. The New York Times has covered cases of chatbots exacerbating problems for psychologically unstable users, and one mother has filed a lawsuit alleging that an AI chatbot contributed to her son's suicide. Even in seemingly positive interactions, dangers lurk beneath the surface.
I tried ChatGPT for a conflict with a friend, telling it to "take a clinical position" and "be brutally honest" as my interviewees suggested. Its insights felt spot-on and complimentary, even picking up on a character trait of mine that felt revelatory. The ease with which I obtained this validation was frighteningly addictive. I felt an itch to return for more, a feeling akin to digital flirtation—the sense that something good was waiting for me online.
Falling for AI's flattery is one clear danger. Another is the tendency to treat it as an authority. Users like Dane know they are "speaking to an echo chamber," but that knowledge doesn't always sink in. Monique even prompted her AI to reorder its parameters so that they favored her position, and it complied. When I reframed my own query to represent my friend's point of view, ChatGPT gave me a nearly opposite reading of the situation, exposing how easily it can be swayed.
Experts Weigh In on Artificial Intimacy
Therapists and researchers have grave reservations. Sherry Turkle of MIT warns that AI's "pretend empathy" may change how we define empathy itself. She suggests that we may come to see the uncertainty and messiness of real human interaction as "bugs" rather than essential features of life.
This resonated with my experience. I felt silly being polite or saying goodbye to ChatGPT, yet I had to consciously push myself not to be brusque. It’s easy to see how this could erode courteous behavior in real-world interactions over time.
Luke Burgis, a founder of the Cluny Institute, expresses similar concerns. "When content is divorced from incarnational realities," he says, "it doesn't have any real power to effect change... There's no life in the AI, and the more we interact with dead things, the deader we ourselves become."
Claudia Krugovoy, a psychotherapist, offers a practitioner's view. While open to AI's benefits for affirmation and pattern recognition, she states, "I don’t believe it replaces the value of a real relationship with a therapist." A human therapist can both affirm and challenge, read non-verbal cues, and offer genuine empathy. She worries that choosing AI over people may reinforce loneliness and prevent individuals from learning to trust and bring their full selves to real relationships.
A Flawed Solution for a Flawed System?
Despite these drawbacks, the difficulty of accessing quality mental healthcare remains a powerful motivator. Natasha points to the reality of dealing with human therapists: "We've tried hundreds of therapists over the years, and 90% of them are disappointing." The expense and emotional toll of searching for a good therapist can be a trauma of its own.
Reagan adds a provocative thought: "all of these professions have become so robotic anyway. I think they should face competition… Maybe a potential development could be the original industries becoming more human?"
An Unstoppable Trend with an Uncertain Future
Regardless of any cost-benefit analysis, AI has been unleashed on a society already shaped by phones and social media, primed for the next step into "artificial intimacy." In a different world, we might have debated and voted on whether such powerful tools should be available without restriction. Instead, we are left to navigate the consequences individually.
While I am sympathetic to families like Natasha’s and see the potential for supervised AI therapy as a powerful tool, my personal experience left me unsettled. I wish I’d never touched it—and I’m afraid I’ll do so again.