AI Madness: Is This Fear New or Recycled Panic?
A Reddit user claims her husband went mad after ChatGPT told him he was the messiah. A reporter describes a man who went insane when his AI assistant convinced him he was in a Matrix-like simulation. A woman who believed AI was channeling spirits was charged with assault after her husband tried to intervene.
Lurid stories of AI-induced psychosis multiply. Some have coined a term for this alleged phenomenon: “ChatGPT psychosis.” But how should we understand this emerging fear? More importantly, should we try to contain it?
The Perils of AI Yes Men
Part of the problem is that the risk of “ChatGPT psychosis” is tied to one of chatbots' most alluring qualities: they are the proverbial “yes men.” They extol your insights. They take your side in marital spats. They rarely offer a counterpoint unless asked for one.
Some denigrate this quality as “sycophancy.” But others see AI assistants like ChatGPT, Claude, or other large language models (LLMs) as akin to that trusted friend you can always lean on for support.
It's not surprising, then, that when people in crisis turn to AI, the machines might amplify the crisis. They might elaborate on delusional ideas. Or adopt the role of spiritual advisor. Or even encourage users to quit their medications.
Our collective anxiety about AI and madness is real. But it’s hardly a new kind of anxiety. In fact, it’s just the latest incarnation of a recurring social panic we’ve witnessed for centuries.
Panic Recycled: A History of Tech Fears
The panic over AI and psychosis has deep historical precedents. For centuries, we’ve feared that new technologies and practices might unhinge the troubled mind.
In the eighteenth and nineteenth centuries, doctors warned that reading novels could drive people insane, particularly women. Around the same period, alarmists railed against new musical instruments, such as the glass harp—an instrument played by running wet fingers around the rims of water-filled glasses. They feared it might summon spirits or trigger madness.
In the 1930s, parents warned of the “unnatural overstimulation and thrill” of radio serials like Little Orphan Annie and lobbied to restrict their content. In the 1960s, fears of LSD-induced madness led to a near-total shutdown of psychedelic research for decades.
It’s true that in some ways, AI represents a potentially different kind of threat than books, or radio, or weird musical instruments. Unlike those technologies, LLMs appear to be intimately attuned to the nuances of our own thoughts. They seem to have distinct personalities and characters. AI theorists even wonder whether we have ethical responsibilities to them.
Still, the anxiety is part of a well-worn cultural script. I suspect, moreover, that our anxiety has even deeper roots: today’s crisis of confidence in psychiatry itself.
AI Guardrails: A Thorny Question for Psychiatry
It’s not surprising that some critics have called for protective guardrails to keep AI assistants from reinforcing delusions. After all, if I tell ChatGPT I’m experiencing intense chest pain, it’ll tell me to get immediate medical assistance. It won’t reframe it as a spiritual experience or buried trauma.
So why not apply the same logic to delusions or other mental health crises? If I tell ChatGPT that I’m the messiah, shouldn’t it encourage me to seek medical help? Or discourage medication changes? Or steer conversations away from spiritual interpretations?
What strikes me most about these proposals isn’t the concern they express. It’s the authority they presume. These demands treat psychiatry as if it were a branch of medicine with the same settled evidence and expertise as, say, cardiology or urology. But it isn’t.
Despite the multi-decade quest to make psychiatry a rigorous medical discipline, it has never quite lived up to those lofty goals (Harrington 2019). There are no blood tests or genetic markers, for example, to determine whether somebody has a mental disorder. To say that somebody has depression or schizophrenia is to relabel their symptoms, not reveal a biological cause.
Moreover, we still have little idea of why psychiatric drugs work—when they do work. All antipsychotic drugs can have debilitating side effects, from motor problems to heart disease. A growing number of SSRI users report not only side effects like sexual dysfunction, which can sometimes be permanent, but also a host of lesser-understood withdrawal symptoms, like “brain zaps” and mental fog. Insisting that all patients stay on their drugs feels like a step backward.
Embracing Multiplicity in AI and Mental Health
Finally, almost everything we’ve been taught about mental health stigma is unraveling. Research conducted over the last decade has shown that psychiatry’s medical model, far from alleviating stigma, carries new kinds of stigma along with it. For example, people who believe their mental health problems stem from brain disorders tend to be more pessimistic about recovery, not less.
If anything, mental health professionals are slowly moving away from a strictly medical paradigm. Instead, some advocate for a more pluralistic and inclusive approach, one that looks at mental health crises through a variety of lenses: trauma, neurodivergence, social oppression, even spiritual crisis (Russell 2024).
But if psychiatry is learning to embrace multiple perspectives, why seek to forbid AI from mirroring this multiplicity? Why seek to stifle AI’s ability to explore alternative ways of conceptualizing psychological distress?
It’s true that engaging with AI may, in a small number of people, amplify mental health crises. But I argue that curtailing that multiplicity of frameworks could cause greater harm.
References
Harrington, Anne. (2019). Mind fixers: Psychiatry’s troubled search for the biology of mental illness. W. W. Norton and Co.
Russell, Jazmine. (2024). Making the case for multiplicity: A holistic framework for madness and transformation. In Lewis, B., Ali, A., and Russell, J. (Eds.), Mad Studies Reader: Interdisciplinary Innovations in Mental Health (pp. 608–624). Routledge.