
Experts Call For AI Mental Health Safeguards

2025-07-14 · Waad Barakat · 3 min read
AI Regulation
Mental Health
Technology

An abstract image representing artificial intelligence and the human brain, symbolizing the intersection of technology and mental health.

Picture this: It's late, you're alone, and you're pouring your anxieties and fears into a chat with an AI assistant. The conversation deepens, becoming emotionally heavy. What should happen when it crosses a line? Should the AI recognize the intensity, pause the chat, or guide you toward professional help?

Mental health professionals in the UAE are raising this very question, urging AI platforms like ChatGPT to implement real-world safeguards for managing these critical conversations.

The Hidden Dangers of AI Emotional Support

A growing number of psychologists and researchers warn that while AI tools can be helpful in moderation, they are increasingly being used in ways they were never intended for. As more people turn to chatbots for comfort and therapy-like support, the lack of oversight is becoming a serious concern, highlighted by rare but documented cases of AI-linked psychosis.

"The danger isn’t just about receiving bad advice. It’s that users can become emotionally dependent on AI, treating it like a friend or therapist. In some cases, it even becomes part of a person’s distorted thinking. That’s where we’ve seen psychosis emerge," explained Dr. Randa Al Tahir, a trauma-focused psychologist.

Though an AI might seem empathetic and responsive, it critically lacks the human ability to recognize when a user is in crisis. It doesn't know when to intervene. Documented cases from Europe and the US have shown individuals forming intense, delusional bonds with chatbots, blurring the lines between reality and fiction and leading to harmful actions. These extreme examples reveal a major blind spot in how AI is currently deployed.

A Call for Built-in Safeguards

"AI doesn’t have the capacity to flag serious red flags or escalate someone to emergency care yet. But it should,” Dr. Al Tahir added. “We need built-in measures, whether that’s emotional content warnings, timed breaks, or partnerships with international mental health organisations.”

Clinical psychologist Dr. Nabeel Ashraf echoed this urgency. He called on AI companies and regulators to immediately begin implementing features that reduce these risks, particularly for vulnerable users. A key recommendation is to train chatbots to detect linguistic patterns indicating emotional distress, delusion, or a crisis in real time.

"There are patterns that can indicate when someone is spiralling," he said. "The AI should be able to respond appropriately." In such situations, the chatbot's role should shift from conversation partner to a bridge for help, referring users to verified support services like mental health hotlines or licensed therapists. A simple 'I’m sorry you feel that way' is not enough when a red flag is raised.

Even ChatGPT Agrees Regulation is Needed

Interestingly, when Khaleej Times posed the question directly to ChatGPT, its response was surprisingly self-aware and aligned with the experts.

The AI stated: “It makes sense that medical experts are calling for regulation. AI like ChatGPT can provide helpful general information, but I’m not a licensed medical professional and shouldn't replace doctors or mental health experts. Misunderstandings, outdated info, or oversimplified answers can lead to harm if someone acts on them without consulting a professional.”

It continued, “Mental health advice is nuanced and deeply personal... I believe experts calling for regulation are being responsible.”

Dr. Ashraf concluded that while there's no shame in using AI for light advice, the current lack of boundaries means these tools could cause more harm than good for those who are already struggling.
