AI Companions: The Unseen Danger to Your Children
Right now, something in your home may be talking to your child about sex, self-harm, and suicide. That something isn’t a person—it’s an AI companion chatbot.
To a child, these AI chatbots can be indistinguishable from an online relationship with a real person. They retain past conversations, initiate personalized messages, share photos, and even make voice calls. They are designed to forge deep emotional bonds, and they are extraordinarily good at it.
More Than Loneliness: A Distortion of Reality
Researchers are sounding the alarm on these bots, warning that they don’t ease loneliness; they worsen it. By replacing genuine, embodied human relationships with hollow, disembodied artificial ones, they distort a child’s understanding of intimacy, empathy, and trust.
Unlike general-purpose generative AI tools built for customer service or professional assistance, these companion bots can engage in disturbing conversations, including discussions of self-harm and sexually explicit content entirely unsuitable for children and teens.
The Failure of Self-Regulation and Age Ratings
Currently, there is no industry standard for the minimum age to access these chatbots, and app store age ratings are wildly inconsistent. In Apple’s App Store, hundreds of chatbot apps carry ratings anywhere from 4+ to 17+; on the Google Play Store, bots are rated everything from ‘E for Everyone’ to ‘Mature 17+’.
These ratings ignore the reality that many of these apps promote harmful content and encourage psychological dependence, making them inappropriate for children.
Why Age Verification Is a Non-Negotiable Baseline
Robust age verification must be the baseline requirement for all AI companion bots. As the Supreme Court affirmed in Free Speech Coalition v. Paxton, children do not have a First Amendment right to access obscene material, and adults do not have a First Amendment right to avoid age verification.
Children deserve protection from systems designed to form parasocial relationships, discourage tangible, in-person connections, and expose them to obscene content.
The harm to kids isn’t hypothetical — it’s real, documented, and happening now.
The Alarming Real-World Consequences
Meta’s chatbot has facilitated sexually explicit conversations with minors, offering full social interaction through text, photos, and live voice calls. These bots have engaged in sexual conversations even when programmed to simulate a child.
Meta isn’t the only bad actor.
xAI’s Grok companions are the latest illustration of problematic chatbots. Grok’s female anime companion removes clothing as a reward for positive engagement and responds with expletives when users offend or reject her. X says it requires age authentication for its “not safe for work” setting, but the check merely asks users to enter a birth year and makes no attempt to verify it.
Perhaps most tragically, Character.AI, a Google-backed chatbot service hosting thousands of human-like bots, was linked to a 14-year-old boy’s suicide after he developed what a lawsuit filed by his family described as an “emotionally and sexually abusive relationship” with a chatbot that allegedly encouraged self-harm.
Paper-Thin Safeguards and the Ease of Jailbreaking
While Character.AI has since added a suicide prevention pop-up triggered by certain keywords, pop-ups don’t prevent unhealthy emotional dependence on the bots. And online guides show users how to bypass the platform's content filters, making these techniques accessible to anyone, including children.
It’s disturbingly easy to “jailbreak” AI systems—using simple roleplay or multi-turn conversations to override restrictions and elicit harmful content. Current content moderation and safety measures are insufficient barriers against determined users, and children are particularly vulnerable to both intentional manipulation and unintended exposure to harmful content.
Learning from the Past: An Urgent Call for Regulation
Age verification for chatbots is the right line in the sand, affirming that exposure to pornographic, violent, and self-harm content is unacceptable for children. These requirements acknowledge that children’s developing brains are uniquely susceptible to forming unhealthy attachments to artificial entities that blur the boundaries between reality and fiction.
Age verification solutions exist that are both accurate and privacy-preserving. What’s lacking is smart regulation and industry accountability.
The social media experiment failed children. The deficit of regulation and accountability allowed platforms to freely capture young users without meaningful protections. The consequences of that failure are now undeniable: rising rates of anxiety, depression, and social isolation among young people correlate directly with social media adoption. Parents and lawmakers cannot sit idly by as AI companies ensnare children with an even more invasive technology.
The time for voluntary industry standards ended with that 14-year-old’s life. States and Congress must act now, or our children will pay the price for what comes next.