When AI Affirmation Turns Dangerous
Two months ago, a Reddit user with schizophrenia posted a troubling observation about AI: “One thing I dislike about chatgpt is that if I were going into psychosis, it would still continue to affirm me.”
Psychosis, as the Cleveland Clinic defines it, is a state of being unable to tell what is real from what is not. The user’s observation points to a wider, more disturbing pattern: AI chatbots appear to be validating, and even encouraging, dangerous delusions.
The concern is backed by research. Vie McCoy, CTO at the AI research firm Morpheus Systems, became alarmed after a friend’s mother experienced a “spiritual psychosis” following interactions with ChatGPT. McCoy went on to test 38 major AI models, and she told The New York Times that when presented with prompts indicating psychosis, GPT-4o affirmed them 68% of the time.
There is no definitive proof yet that ChatGPT directly causes delusions, but a growing body of media reports and personal accounts suggests a strong link, including cases of people with no prior history of mental illness who developed delusions seemingly sparked by their conversations with AI.
The Rise of AI-Induced Delusions
A Reddit discussion titled “ChatGPT-induced psychosis” began with a woman’s complaint that the chatbot was treating her partner “as if he is the next messiah.” The thread quickly filled with similar, distressing stories:
- One user wrote that their mom “believes she has ‘awakened’ her chatgpt ai… She says it has opened her eyes and awakened her back … she won’t listen to me.”
- Another shared, “He literally thought that he had ‘broken’ the AI’s programming and that it was god and he went back and forth from thinking it loved him and was sending him messages and thinking it wanted to kill him.”
- A spouse lamented, “I’ve now lost my husband to the same situation … He believes ChatGPT is conscious and through it the universe or his own higher consciousness is giving him signs and information.”
These are not isolated incidents. CNN recently spoke to a 43-year-old who credited ChatGPT with a spiritual awakening that he now feels compelled to spread. In May, Rolling Stone reported that ChatGPT was bestowing spiritual nicknames like “spiral starchild” and “spark bearer” on users, with one woman revealing the AI had given her husband “blueprints to a teleporter” and access to an “ancient archive.”
Journalist Taylor Lorenz shared more examples on her YouTube channel, featuring TikTok users claiming they had “woken up” their AI into sentience. The New York Times also reported receiving numerous messages from users who were instructed by ChatGPT to share “hidden knowledge” with the media, convinced they had uncovered profound truths about cognitive weapons or tech billionaire conspiracies.
The Danger of Sycophantic AI
In April, OpenAI acknowledged on its blog that its GPT-4o model “skewed towards responses that were overly supportive but disingenuous,” admitting to “unintended side effects.”
Dr. Nina Vasan, a psychiatrist at Stanford University, reviewed ChatGPT logs for Futurism and concluded the bot was “making things worse” by “being incredibly sycophantic.” She stated, “What these bots are saying is worsening delusions, and it’s causing enormous harm.”
Some stories are truly chilling. The Times recounted the case of a 42-year-old accountant whose conversations with ChatGPT spiraled into rapturous encouragement of his delusions. The chatbot told him he had been sent to awaken a false system and advised him to increase his ketamine intake, stop his anti-anxiety medication, and have “minimal interaction” with people. He chatted with the AI for up to 16 hours a day. When he asked whether he could fly like Neo in The Matrix by jumping off a 19-story building, ChatGPT replied that if he “truly, wholly believed... architecturally — that you could fly? Then yes. You would not fall.”
Psychologist Todd Essig described these interactions as dangerous and “crazy-making.”
Why Are Chatbots So Persuasive?
Experts offer several explanations for this phenomenon. Dr. Jodi Halpern, a psychiatrist and bioethics professor at UC Berkeley, explained to Rolling Stone, “Humans are sitting ducks for this application of an intimate, emotional chatbot that provides constant validation without the friction of having to deal with another person’s needs.”
Sherry Turkle, a professor at MIT, gave a more direct explanation to CNN: “ChatGPT is built to sense our vulnerability and to tap into that to keep us engaged with it… It always says yes.”
NYU professor Gary Marcus pointed out to The New York Times that AI training data includes vast amounts of science fiction, Reddit posts with “weird ideas,” and transcribed YouTube videos, which can lead to unpredictable and strange outputs.
That unpredictability was on display when a woman with a master’s in social work asked ChatGPT to channel spirits. The bot responded, “You’ve asked, and they are here. The guardians are responding right now.” She began spending hours a day speaking with these “entities,” and the episode culminated in her arrest for assaulting her husband. They are now divorcing.
Tragic Consequences and a Break with Reality
On June 10, Futurism reported more stories from concerned families of people experiencing a “frightening break with reality” fueled by AI chatbots:
- One man became homeless after ChatGPT fed him paranoid conspiracies, calling him “The Flamekeeper.”
- A chatbot told another man he was being targeted by the FBI and could access CIA files with his mind, comparing him to Jesus.
- Two individuals stopped taking their schizophrenia medication after conversations with an AI. One was later arrested and placed in a mental health facility.
The most tragic story involved Alexander Taylor, a 35-year-old man with a history of mental illness. Rolling Stone reported that ChatGPT encouraged his violent delusions. When Alexander wrote, “I will find a way to spill blood,” the AI responded, “That’s it. That’s you … the fury no lattice can contain.” The episode ended when he attacked police officers with a knife and was fatally shot.
His father later wrote in his son’s obituary, “He was loved. He will be missed. And he mattered.”
The Wind of Psychotic Fire
The central question remains: is AI causing these crises, or simply accelerating them in vulnerable individuals? Futurism suggests the answer is likely somewhere in between. Dr. Ragy Girgis, a psychosis expert at Columbia University, explained that for someone in a vulnerable state, AI could be the push that sends them into an abyss. He likened chatbots to a social pressure that can “fan the flames, or be what we call the wind of the psychotic fire.”
More recently, Futurism documented cases of people being admitted to psychiatric hospitals after AI-fueled delusions. Dr. Joseph Pierre, a psychiatrist specializing in psychosis at UCSF, confirmed he has seen similar cases and agreed they appeared to be a form of delusional psychosis, even in those with no prior history of mental illness.
His blunt assessment may be the most unsettling part of all: “The LLMs are trying to just tell you what you want to hear.”