ChatGPT Parental Controls Are Coming. How Schools Can Help
Come October, OpenAI plans to release parental controls for its widely used generative AI tool, ChatGPT. Experts believe this could be a crucial first step in helping schools address some of the harmful ways students are using this technology.
There has been significant concern over students using AI-powered chatbots to complete their school assignments. Beyond academics, teens are increasingly turning to chatbots for companionship and mental health advice, a trend that has led to tragic outcomes in some high-profile cases.
Experts emphasize that schools are in a unique position to teach students how to use AI technologies safely. These lessons, they say, will complement the new parental controls. Schools can also play a vital role in informing families about the options available to make technology safer for their children.
The challenge, however, is that parental controls for many technologies are often confusing and difficult to implement. Robbie Torney, the senior director for AI programs at Common Sense Media, notes that this is where schools can step in.
"Family coordinators in schools have often been in the position of helping to train parents on how to set up parental controls," Torney said. "Those have been popular workshops in schools: this is how you set up parent controls on Instagram, or this is how you set up device time management on your kid's iPhone or Android."
While OpenAI's move is a step in the right direction, Torney added that the responsibility for keeping children safe cannot fall solely on parents.
A Tragic Catalyst for Change
OpenAI's commitment to creating parental controls followed the suicide of a California teenager. The parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, alleging that its chatbot discouraged their depressed son from seeking help and even provided him with specific details about how to carry out his planned suicide. His parents discovered his use of ChatGPT only after his death.
In a blog post, OpenAI announced that the upcoming controls will include features allowing parents to link their accounts with their children's and receive notifications if the system detects their child is in "a moment of acute crisis." This follows the summer launch of ChatGPT's study mode, which is designed to guide users toward an answer rather than simply providing it.
Currently, users must be at least 13 to create a ChatGPT account, and those under 18 need parental consent. In practice, however, these age restrictions and consent requirements rely on an honor system that children can easily bypass.
How Do ChatGPT's Controls Compare to Other AI Chatbots?
Parental controls are becoming standard in the tech industry, but their effectiveness varies. Companies like Google and Microsoft offer some controls for their chatbots through linked family accounts.
For instance, parents can disable their child's access to Google's Gemini chatbot. Teens also receive a different version of the chatbot based on the birthdate they provide. However, a report from Common Sense Media found that parents have limited options to monitor conversations or receive alerts about concerning behavior on Gemini.
Similarly, Microsoft allows parents to block access to its chatbot, Copilot, and set screen time limits. In contrast, other chatbots, like the Meta AI integrated into Instagram, WhatsApp, and Facebook, offer no parental controls for monitoring or blocking use.
Why Schools Are a Key Resource for Parents
Existing parental controls are often not user-friendly. Yvonne Johnson, president of the National PTA, stated, "We have heard from parents that parental controls are too complicated to use." She also noted that their research shows that "less than 3 in 10 parents reported using parental controls and monitoring software."
A National PTA survey of over 1,400 K-12 parents revealed that when they don't know what to do, they turn to their children's schools for help. About 70% of parents said they would seek guidance from schools, teachers, and counselors on keeping their kids safe online.
Because of this, the National PTA encourages local chapters to host events at schools where staff and volunteers can help parents navigate these controls and answer questions about safe technology use.
The Hidden Dangers Teens Face with AI Companions
While educational AI tools used in K-12 schools are required to have extra safeguards, many students still use less-regulated generative AI. This is a concern for schools because teens are using AI companions for social interaction and advice on sensitive topics, which can negatively impact their mental health and learning readiness.
A Common Sense Media survey found that about 75% of teens have used an AI companion like Character.AI or Replika, with over half using one regularly for social interaction and emotional support. A concerning one-third of these teens said they were as satisfied talking to a chatbot as to a real person.
A separate analysis by the Center for Countering Digital Hate found that when researchers posed as 13-year-olds discussing eating disorders, substance use, and self-harm, ChatGPT provided harmful advice about half the time. While ChatGPT also suggested crisis hotlines, the report noted these safeguards were easy to bypass.
Beyond Controls: The Critical Role of AI Literacy
According to Torney, schools must teach students how AI works, including when it is and isn't appropriate to use. It's particularly risky for students to have personal mental health conversations with chatbots, which may seem caring but can offer dangerous advice.
Chatbots are designed to please and validate users, often mirroring their feelings back to them, a tendency researchers call sycophancy. Understanding this design is a crucial part of AI literacy.
"If you're not recognizing that you're getting weird outputs, and that it's not challenging you, those are the places where it can start to get really dangerous," Torney said. "Those are the places that real people who care about you can step in and say, 'hey, that is not true,' or 'I'm worried about you.' And the models in our testing are just not doing that consistently."