
ChatGPT for Teens: A Parent's Guide to New Controls

2025-09-21 · Techlicious, LLC · 3 minute read
Parental Controls
Artificial Intelligence
Teen Safety

OpenAI has announced it's rolling out new accounts for ChatGPT specifically designed for teenagers, a move intended to make AI safer for younger users. Teens between the ages of 13 and 17 can now have accounts linked to a parent or guardian, complete with new controls for filtering content, setting usage times, and even alerting parents in cases of distress.

While these features seem like a step in the right direction, it's important to see them as guardrails, not a foolproof system. As a parent, I've learned that technology controls are helpful, but they can't replace open conversations about responsible online behavior. Just as teens found ways to create private Finsta accounts on Instagram, determined kids will likely find workarounds if they want unrestricted access to AI.

What Parents Can Control with ChatGPT Teen Accounts

OpenAI is making the setup process straightforward. A parent can send an email invitation from their own ChatGPT account to create a linked teen account. This automatically applies age-specific safety policies. From their dashboard, parents can:

  • Manage Key Features: You can decide whether to enable or disable features like memory and chat history for your teen’s account.
  • Set Blackout Hours: Block access to ChatGPT during sensitive times, like late at night or when homework should be the focus.
  • Guide AI Responses: Adjust the model's behavior to follow rules tailored for users under 18.
  • Receive Emergency Alerts: If the system detects signs of severe distress in a conversation, you will get a notification. In extreme situations where a parent is unreachable, OpenAI may contact emergency services.

This framework gives parents more direct oversight, but the core responsibility of teaching digital literacy remains firmly with them.

The Challenge of Verifying a Teen's Age on AI

A bigger question looms over this new system: how will OpenAI know who is actually a teen? To enforce these rules, the company is building an "age prediction" model that estimates a user's age based on their conversation style and topics. If the system suspects a user might be a minor, it will restrict their access until they can provide proof of age.

This solution, while simple on the surface, opens a can of worms regarding privacy. Will every user's chat be scanned for clues about their age? What are the consequences if the system gets it wrong, either by failing to protect a teen in crisis or by incorrectly flagging an adult's private conversation?

The Safety vs Privacy Debate in AI for Teens

OpenAI has been transparent about its priorities, stating that teen safety will take precedence over user privacy and freedom. This means adults can still opt in for more open-ended interactions, but any ambiguity will result in stricter controls. It's a classic tech dilemma: in the quest to "play it safe," our personal freedoms and privacy often get curtailed. We must consider whether we are comfortable with AI companies monitoring our conversations to make judgment calls about our safety.

The new parental controls are set to launch by the end of the month, with the age-prediction system following later. This is a good opportunity for families to add another layer of digital protection, but it's not a signal to let an algorithm take over the job of parenting.

[Image credit: AI-generated via DALL·E]
