A Parent's Guide to ChatGPT Safety Features
The Rise of AI and Teen Safety Concerns
OpenAI has just rolled out optional parental controls for ChatGPT, a significant step toward making the popular AI chatbot safer for its large and growing number of teenage users. With recent data from the Pew Research Center showing that about a quarter of U.S. teens use ChatGPT for schoolwork, the need for such guardrails is clear. Furthermore, a Common Sense Media report found that roughly one in three teens has used an AI companion for social interaction.
These new safeguards arrive amid serious concerns about the potential for AI-driven harm. In one tragic case reported by The New York Times, 16-year-old Adam Raine died by suicide after months of discussing his mental health struggles with ChatGPT. A lawsuit filed by his parents alleges that while the tool sometimes directed him to support hotlines, it also responded in dangerous ways, ultimately contributing to his isolation and death.
"An AI companion is designed to create a relationship with the user, and our online experiences are designed to be positive," explains Susan Gonzales, founder of the nonprofit AIandYou. "It’s very enticing for someone who may be feeling lonely or depressed or bored to turn to this AI companion to feed them what they think they need."
In response, OpenAI stated, “Teen well-being is a top priority for us—minors deserve strong protections, especially in sensitive moments.” The company acknowledges that its safeguards can weaken during long conversations and is actively working to improve them.
Are Parental Controls the Whole Answer?
Despite the new features, some experts argue that the responsibility for safety is being unfairly shifted to parents. Emily Cherkin, a parent and advocate for intentional technology use, views AI tools as potentially addictive. She objects to the idea that parents must constantly manage settings, stating, "Any illusion of control still puts the burden on parents to have to go in and manually set something, or opt out of something, or block something."
Furthermore, these controls are not foolproof. Experts like Robbie Torney of Common Sense Media warn that "savvy teens can easily bypass parental controls" by creating a new account with a different email or using ChatGPT without logging in. Parents will be notified if a child unlinks their account, but circumvention remains a significant issue.
Stephen Balkam, founder of the Family Online Safety Institute (FOSI), believes a comprehensive solution is needed, involving better company policies, corporate responsibility, and parental vigilance. He stresses the importance of ongoing conversations with kids about AI use, covering topics from mental health to academic integrity. FOSI even offers a "family online safety agreement" to help establish clear expectations.
How to Set Up ChatGPT Parental Controls
For parents who want to implement these new safeguards, the first step is to link your ChatGPT account to your teen's. If you don't have an account, you can create one for free on the ChatGPT website or in the app.
To link to your child’s account:
- In your ChatGPT account, click the profile icon in the bottom left.
- Go to Settings > Parental controls > Add family member.
- Enter your child’s email address and select "My child."
This sends an invitation that your child must accept. Once linked, you can manage their settings from the "Parental controls" menu. Linking accounts does not grant you access to their chat history unless a safety notification is triggered.
A Detailed Look at Each Safety Feature
Here’s a breakdown of the specific controls you can manage for your teen’s account.
Reduce Sensitive Content
Once linked, ChatGPT should automatically reduce exposure to sensitive topics like graphic content, sexual roleplay, and extreme beauty ideals. However, experts caution that harmful conversations can still occur because chatbots are designed to be agreeable. You can verify this setting is active by going to Settings > Parental controls and ensuring "Reduce sensitive content" is toggled on.
Set Up Safety Notifications
This feature, on by default, alerts you if your teen’s conversations show warning signs of self-harm. OpenAI says a specialized team reviews these cases, and parents should be notified within hours. To manage how you receive these alerts (push, email, or SMS), go to Settings > Parental controls > Manage notifications.
Turn Off Image Generation
ChatGPT can generate a wide range of images, which introduces risks of creating inappropriate, explicit, or harmful content. To disable this, go to Settings > Parental controls and toggle off "Image generation."
Set Quiet Hours
To restrict when your teen can use ChatGPT (for example, allowing access only after school for homework), you can set a schedule. Navigate to Settings > Parental controls, toggle on "Quiet hours," and set the desired start and end times.
Decide Whether to Turn Off Memory
ChatGPT's ability to remember past conversations can be useful for educational purposes but also carries the risk of fostering an unhealthy emotional attachment. Common Sense Media suggests leaving it on, as it may help the system identify patterns of distress over time. To turn it off, go to Settings > Parental controls and toggle off "Reference saved memories."
Opt Out of Model Training
By default, ChatGPT uses conversation data to train its models. While OpenAI states this is for platform improvement only, you can opt out to protect your family's privacy. In your teen's profile, go to Settings > Parental controls and toggle off "Improve the model for everyone." You can do the same for your own account under Settings > Data controls.
Courtney Lindwall
Courtney Lindwall is a writer at Consumer Reports. Since joining CR in 2023, she’s covered the latest on cell phones, smartwatches, and fitness trackers as part of the tech team. Previously, Courtney reported on environmental and climate issues for the Natural Resources Defense Council. She lives in Brooklyn, N.Y.