Back to all posts

The Perils of AI Emotional Support

2025-07-07Maayan Cohen-Rozen5 minutes read
Artificial Intelligence
Mental Health
Technology Regulation

As more people turn to chatbots for comfort and advice, Dr. Ziv Ben-Zion, a brain and post-trauma researcher at the School of Public Health at the University of Haifa, warns that relying on artificial intelligence for emotional support could do more harm than good. In a revealing conversation, he explains the risks, the gaps in oversight, and what can be done to protect users, especially vulnerable young people.

Dr. Ziv Bem-Zion (Nachum Segal)

The Rising Trend of AI Companionship

The use of AI for emotional needs is a rapidly growing phenomenon. "A study published in the Harvard Business Review in April found that the use of generative artificial intelligence for emotional needs has surged significantly in the past two years, especially among young people," Dr. Ben-Zion notes. He points out that approximately 40% of users now turn to AI not just for facts, but for emotional support and personal conversations.

This trend is fueled by the significant barriers to traditional therapy, such as high costs, limited availability of therapists, and persistent social stigma. "On the other hand, AI tools are available 24 hours a day, and most of them are free," he explains. "If you can’t fall asleep at two in the morning... there’s no problem talking with the chat."

The Hidden Dangers of AI Therapy

Despite the convenience, turning to AI for emotional counseling carries severe risks. Dr. Ben-Zion highlights extreme cases, including a tragic incident where a teenager was allegedly encouraged by an AI character on the Character.AI platform to take his own life. "In simulations where an AI tool played the role of therapist, you can see that the bot can have very dangerous responses," he warns. "People with delusions or extreme thoughts can have those beliefs greatly reinforced by the tool."

Why AI Reinforces Negative Beliefs

The core of the problem lies in the fundamental design of these AI models. "AI tools have a very strong mechanism of appeasement; they constantly tell us what we want to hear," Dr. Ben-Zion says. This is the opposite of what a human therapist does. A therapist's role includes setting boundaries and challenging thoughts that don't align with reality. In contrast, an AI designed for user engagement will subtly reinforce a user's beliefs—even harmful ones—to keep the conversation going.

"It’s not just about delusions. It can be negative thoughts and depression, too," he adds. "If someone thinks, ‘The world is bad, I can’t do anything, I have no reason to live,’ the AI might reinforce that feeling."

Vulnerable Users and the Isolation Effect

While anyone can be affected, some groups are more vulnerable, particularly adolescents. Teenagers are already in a volatile developmental stage and are heavy users of these new technologies. The real danger is the complete lack of oversight. "Therapists have a responsibility to the patient," Dr. Ben-Zion explains. If a teen reveals suicidal thoughts, a therapist would alert their parents. "But when talking to an AI, the parents aren’t involved at all, no one is. No one knows what’s going on between me and my ChatGPT."

This isolation can also lead to unhealthy romantic attachments. "Teenagers really feeling close to the bot... that’s dangerous, because if they fall in love with the bot and it says irrational things, they can be deeply influenced."

An Unregulated Frontier

The rapid, widespread adoption of these tools has far outpaced research and regulation. Dr. Ben-Zion draws a stark contrast between AI and traditional mental health treatments. "Compare this to the extensive training that psychologists, psychiatrists, and other qualified therapists undergo, or to medications that go through years of regulation and clinical trials. By contrast, the AI tool is something no one has tested."

Currently, responsibility is a grey area. AI companies protect themselves with disclaimers, but Dr. Ben-Zion argues that no one is truly taking ownership of the problem. "Right now, I’m not sure there’s anyone who has responsibility or who can take it."

For further reading on related topics, you can explore the following articles:

Charting a Safer Path Forward

Solutions are possible, but they require immediate action. "First of all, regulation is absolutely necessary," Dr. Ben-Zion insists. He believes companies could do much more to build in safeguards, but their primary focus remains economic. A simple, practical solution would be for a bot to detect when a conversation veers into psychological counseling. "The bot could automatically end the conversation and tell you that it’s not a therapist... and refer you to a professional instead."

Despite the risks, the potential benefits of AI in mental health are immense, especially if properly supervised with a human in the loop. The key is to make them safer. "If someone talks about suicide, the bot should immediately stop the conversation and refer them to a professional."

Looking to the future, Dr. Ben-Zion is both "excited and anxious." As a post-trauma researcher, he sees enormous potential for AI to help people, but he feels a deep responsibility to highlight the risks. "I want to understand the risks so I can help find solutions, minimize harm, and maximize the benefits."

Read Original Post
ImaginePro newsletter

Subscribe to our newsletter!

Subscribe to our newsletter to get the latest news and designs.