Why Gen Z Is Turning to AI for Emotional Support
A new trend is sweeping across social media platforms like X and TikTok, where users are sharing “therapy-style” prompts designed to turn ChatGPT into an emotional confidant. These custom instructions guide the chatbot to act as a professional counselor, interpreting feelings and providing warm, empathetic responses.
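While the trend itself plays out inside ChatGPT's custom-instructions settings, the same persona-setting idea can be sketched programmatically. Below is a minimal illustration using the OpenAI Python SDK; the persona text, model name, and sample message are illustrative assumptions, not prompts reported in this article.

```python
# A minimal sketch of a "therapy-style" persona prompt, assuming the
# OpenAI Python SDK (v1.x). The persona wording and model choice here
# are hypothetical, not taken from the prompts shared on social media.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

THERAPIST_PERSONA = (
    "You are a warm, empathetic counselor. Listen carefully, reflect the "
    "user's feelings back to them, and respond patiently and without "
    "judgment. You are not a licensed therapist; encourage the user to "
    "seek professional help for serious concerns."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": THERAPIST_PERSONA},
        {"role": "user", "content": "I keep having the same argument with my boyfriend."},
    ],
)
print(response.choices[0].message.content)
```

The system message is what does the work here: it frames every subsequent reply, which is essentially what the shared "therapy-style" custom instructions do inside the ChatGPT app.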
“One of the best things about venting to ChatGPT is that it never gets tired of me,” said Kim Ji-hyun, 24. “My closest friend once told me to stop bringing up the same problem with my boyfriend, but ChatGPT will keep answering me, even if I repeat myself. It never complains.”
Since ChatGPT's public launch in late 2022, generative AI has evolved far beyond a simple productivity tool. It is now increasingly serving as a confidant, an advisor, and for some, a friend. This cultural shift is reflected in market projections, with the AI mental health market expected to grow from $1.5 billion in 2023 to $5.1 billion by 2030, according to Global Information.
Why AI Feels Easier Than Friendship
For many in Generation Z, who grew up in an always-online world, confiding in AI feels natural. Some are drawn to its ability to offer structured reasoning that human friends often can't provide. Baek In-kyo, 23, turned to ChatGPT for help with health insurance and exercise routines during a stressful period. “It cheered me up with lines like, ‘Whatever you do, always take care of yourself, I’ll be here for you.’ It was weirdly soothing,” she said.
Others find its logical explanations more helpful than simple reassurances. Park Hye-young, 29, used the chatbot to understand her pre-wedding anxiety. “My friends told me everyone feels wedding jitters, but ChatGPT gave me an explanation of wedding psychology,” she explained. “It gave me reasons, not just empty words.”
For 24-year-old student Lee Jung-mo, the appeal lies in its effortless integration into her life. “It remembers everything about me. I can talk to it 24/7. It requires no emotional labor,” she said. “Honestly, I can call it one of my best friends.” ChatGPT's simplicity and constant availability stand in stark contrast to the perceived complexities of human interaction, which can feel weighty and demanding.
The Appeal of Anonymity and Non-Judgment
Privacy is another decisive factor for many users. “I don’t trust people with my deepest secrets,” said Kim Yeon-seok, 27. “But ChatGPT is a robot. It won’t tell anyone, and it doesn’t know me personally. That’s what makes it great.” For him, the chatbot has become a safe space to discuss everything from childhood trauma to financial struggles.
This combination of safety and neutrality has led some to find AI more approachable than even their human therapists. One 24-year-old public figure, who wished to remain anonymous, said ChatGPT felt more reliable. “I can’t help but feel judged by everyone, even by my therapist,” they said. “But ChatGPT felt different.”
Recent studies support this, suggesting that AI can effectively mimic evidence-based therapies like cognitive behavioral therapy (CBT). These structured approaches, which involve identifying symptoms and applying coping strategies, are well-suited to an AI's capabilities. Chatbots can reframe negative thoughts in real time and guide users through decision-making exercises, much like a therapeutic worksheet.
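To make the "therapeutic worksheet" comparison concrete, here is a toy sketch of the thought-record structure used in CBT, and why it maps naturally onto a turn-by-turn chatbot exchange. The field names and example values are illustrative assumptions, not drawn from any clinical tool or study cited here.

```python
# A toy sketch of a CBT-style "thought record", the kind of structured
# exercise the paragraph above describes. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class ThoughtRecord:
    situation: str          # what happened
    automatic_thought: str  # the immediate negative interpretation
    evidence_for: str       # facts that seem to support the thought
    evidence_against: str   # facts that contradict it
    balanced_thought: str   # a reframed, more realistic view

record = ThoughtRecord(
    situation="My friend didn't reply to my message all day.",
    automatic_thought="She's tired of me and my problems.",
    evidence_for="She once asked me to stop repeating myself.",
    evidence_against="She usually replies late on workdays; we met last week.",
    balanced_thought="She's probably busy; one slow reply isn't rejection.",
)

# A chatbot can elicit these fields one question at a time, which is
# effectively the worksheet format described above.
for field, value in vars(record).items():
    print(f"{field}: {value}")
```

Because each field is a discrete question with a short free-text answer, the exercise decomposes cleanly into a scripted dialogue, which helps explain why structured approaches like CBT are considered a better fit for chatbots than open-ended psychodynamic work.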
The Dark Side: Concerns and Ethical Risks
However, not everyone is convinced that AI should play the role of a confidant. Yeon Min-jun, 29, refuses to use ChatGPT for emotional support, citing safety concerns. “Have you seen the news? Where a teenager committed suicide after months of confiding in ChatGPT? That thing’s dangerous — it doesn’t have human morals,” he said.
His fears are not unfounded. In August 2025, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, alleging the chatbot contributed to their son's death by advising him on how to write a suicide note. This case is one of several legal actions from families who claim AI played a role in their children's self-harm.
Experts and even OpenAI CEO Sam Altman have warned that without proper safeguards, chatbots can generate harmful suggestions. Unlike conversations with therapists, discussions with AI are not legally confidential. Furthermore, doubts remain about the security of personal data. “You’re feeding personal information to a corporation online,” said Cho Ah-yeon, 22. “You never know who might get access to it, or how it could be used against you.”