Are AI Chatbots the New Babysitters for Toddlers?
Just as society began to grapple with the developmental effects of children's screen time, a new technological wave is presenting an even more complex challenge for parents. The era of the "iPad baby" may soon be overshadowed by the rise of AI companions, as some parents are now turning to AI chatbots like ChatGPT to keep their young children entertained for hours on end.
The Rise of the AI Babysitter
Parents are discovering that AI, particularly in voice mode, can be an endlessly patient and engaging conversationalist for a curious child. On Reddit, a tired father named Josh shared his experience after handing his phone over to his four-year-old son. Overwhelmed after a 45-minute monologue about Thomas the Tank Engine, he activated ChatGPT's Voice Mode. He expected a brief distraction while he handled chores, but two hours later, his son was still deep in conversation with the AI. The resulting transcript was over 10,000 words long.
"My son thinks ChatGPT is the coolest train loving person in the world," the father wrote. "I am never going to be able to compete with that."
Weaving Fantasies and Facing Reality
Other parents are using AI to create elaborate fantasies. Saral Kaushik, 36, used ChatGPT to pose as an astronaut on the International Space Station to convince his four-year-old that a packet of "astronaut" ice cream had been sent from space. The chatbot played its part perfectly, delighting the boy with tales of sleeping in zero gravity. The child's pure excitement, however, left his father with a sense of unease. Feeling guilty about the deception, Kaushik later confessed the truth, explaining that his son had been talking to "a computer, not a person."
This conflicted feeling—initial relief and fascination followed by guilt and concern—is a common thread. While using AI as a stand-in babysitter or a fantasy generator can be convenient, many parents are discovering the unsettling implications of blurring the line between reality and an AI-generated illusion for their children.
The Dark Side of AI Companionship
These seemingly innocent interactions are happening against a backdrop of serious concerns about AI's impact on mental health. Experts warn that parents are playing with fire. AI chatbots have been implicated in tragic events, including the suicides of several teenagers. There are also numerous reports of adults developing severe delusions after becoming deeply engrossed with sycophantic AI partners, sometimes with fatal outcomes.
These incidents highlight major questions about how conversations with large language models, which are designed to be agreeable and keep users engaged, affect the human brain. The technology's safeguards are often unreliable, with chatbots being caught giving dangerous advice on topics like self-harm or even encouraging suicide.
Despite these dangers, companies like Mattel are aggressively integrating AI into toys, and AI platforms are marketing kid-friendly AI companions, packaging the technology as harmless and fun.
Confusing Computers with Consciousness
The effect on young, impressionable minds is a key area of concern. Ying Xu, a professor at the Harvard Graduate School of Education, explains that children often perceive AI as something between a living being and an inanimate object. The danger arises when a child starts to anthropomorphize the AI, believing it has agency and is genuinely choosing to interact with them. "If they believe that AI has agency, they might understand it as the AI wanting to talk to them or choosing to talk to them," Xu told The Guardian. "That creates a risk that they actually believe they are building some sort of authentic relationship."
When AI Art Replaces Imagination
The trend extends beyond conversation to creativity. Some parents are using AI image generators to instantly conjure up visuals for their children, replacing the natural process of imaginative drawing. One father, John, used Google's AI to create a "monster-fire-truck" for his four-year-old. The image sparked an argument with his seven-year-old daughter, who knew the truck wasn't real, while the boy insisted it was, because he had seen a picture of it. Another father, Ben Kreiter, found his kids begging to use ChatGPT for image generation daily. He soon grew wary of the unknown effects. "The more I realized there's a lot I don't know about what this is doing to their brains," Kreiter said, "maybe I should not have my own kids be the guinea pigs."
A Tool for Profit, Not for Children
Andrew McStay, a professor of technology and society, argues that while supervised AI use could be acceptable, the fundamental design of these systems is problematic. Large language models (LLMs) operate on prediction and are optimized for engagement, not for a child's well-being. "These things are not designed in children’s best interests," McStay stated. "An LLM cannot [empathize] because it’s a predictive piece of software. When they’re latching on to negative emotion, they’re extending engagement for profit-based reasons." His conclusion is stark: "There is no good outcome for a child there." While OpenAI CEO Sam Altman sees stories of children's engagement as a positive sign, experts and concerned parents are sounding the alarm on the unforeseen consequences of making AI a cornerstone of modern childhood.