OpenAI shares an important update about how ChatGPT will adapt to young users and families.
The company says it's building a system to estimate whether someone is over or under 18, with the goal of offering different experiences and protections based on age. This piece summarizes what was announced, why it matters, and what parents, teens, and developers can expect. (openai.com)
What OpenAI announced and why
OpenAI published the note on September 16, 2025, and says it's working on a long-term system to understand whether a person is older or younger than 18. The idea is that ChatGPT will respond differently to teens and adults, applying more conservative rules when appropriate. (openai.com)
Does that mean perfect accuracy? No, and OpenAI is upfront about it. When the system is unsure or lacks information, ChatGPT will take the safer route and present the experience for minors. Adults will also be able to prove their age to unlock adult features. (openai.com)
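To make that policy concrete, here is a minimal sketch in Python of what a "default to the safer experience when in doubt" rule could look like. Every name in it (AgeSignal, choose_experience, the 0.9 threshold) is a hypothetical illustration, not OpenAI's actual implementation.

```python
# Minimal sketch of a "safer default" policy. The names and the threshold
# are hypothetical and do not reflect OpenAI's real system.
from dataclasses import dataclass
from enum import Enum


class Experience(Enum):
    TEEN = "teen"    # more conservative rules
    ADULT = "adult"  # standard experience


@dataclass
class AgeSignal:
    predicted_adult: bool  # output of a hypothetical age-prediction model
    confidence: float      # 0.0 to 1.0
    verified_adult: bool   # the user completed an age-verification step


def choose_experience(signal: AgeSignal, threshold: float = 0.9) -> Experience:
    """Pick which experience to serve; fall back to the teen one on doubt."""
    if signal.verified_adult:
        return Experience.ADULT  # explicit verification unlocks adult features
    if signal.predicted_adult and signal.confidence >= threshold:
        return Experience.ADULT
    return Experience.TEEN       # unsure or missing information: safer route


# An uncertain prediction lands on the teen experience by default.
print(choose_experience(AgeSignal(predicted_adult=True, confidence=0.6, verified_adult=False)))
```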
Parental controls: what they'll bring and how they work
While the age-prediction tech is being developed, OpenAI says parental controls will be the most reliable tool for families. They'll be available before the end of the month and will allow several concrete actions:
- Link a parent's account to a teen's account (minimum age 13) via an email invitation.
- Guide how the model behaves toward the teen, applying rules specifically for younger users.
- Turn off features like memory and chat history.
- Receive notifications if the system detects the teen is in acute distress. If parents can't be reached in a rare emergency, the note says authorities could be involved.
- Set blackout hours during which the teen cannot use ChatGPT. (openai.com)
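For readers who think in data models, the sketch below collects the announced controls into a single settings object. The field names, types, and defaults are assumptions made for illustration; OpenAI has not published a schema.

```python
# Illustrative grouping of the announced parental controls; field names and
# defaults are assumptions, not OpenAI's actual data model.
from dataclasses import dataclass
from datetime import time
from typing import Optional


@dataclass
class ParentalControls:
    parent_account_id: str                 # parent's account, linked via email invite
    teen_account_id: str                   # teen's account (minimum age 13)
    teen_behavior_rules: bool = True       # apply teen-specific model behavior
    memory_enabled: bool = False           # memory can be turned off
    chat_history_enabled: bool = False     # chat history can be turned off
    distress_notifications: bool = True    # notify parents on signs of acute distress
    blackout_start: Optional[time] = None  # start of hours when ChatGPT is unavailable
    blackout_end: Optional[time] = None    # end of those hours


# Example: keep memory off and block access from 10 p.m. to 7 a.m.
settings = ParentalControls(
    parent_account_id="parent-123",
    teen_account_id="teen-456",
    blackout_start=time(22, 0),
    blackout_end=time(7, 0),
)
print(settings)
```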
What this means for parents and teens
Is this invasive or useful? It depends on how much weight you give safety versus privacy. For a parent who wants to prevent their child from seeing graphic sexual content, these tools offer direct control. For a privacy-conscious teen, the tradeoff is more oversight of how they use the product; for an adult mistakenly classified as a minor, the ability to prove their age and regain full access will be key.
Concrete example: imagine a teenager who uses ChatGPT to study and sometimes asks about sensitive topics. With parental controls, parents can limit features like memory or chat history and set hours without access, avoiding late-night sessions. For emergency decisions, the system tries to balance notifying family and escalating only when there's real risk. (openai.com)
Reasonable doubts and technical limits
Some questions are probably on your mind: how accurate will the age prediction be? How will identity be verified without compromising privacy? OpenAI admits that even advanced systems make mistakes, so their policy is to default to the minor experience when there's doubt. They also promise options for age verification for adults. (openai.com)
From a technical and ethical perspective, two failure modes matter:
- False positives can wrongly restrict adult rights or access to content.
- False negatives can leave minors exposed to harmful material.
That's why design decisions include input from experts, advocacy groups, and regulators. (openai.com)
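A toy example with made-up numbers shows why the tradeoff is unavoidable: raising the confidence required to treat someone as an adult catches more minors but wrongly restricts more adults until they verify their age.

```python
# Made-up model scores (probability of being an adult), for illustration only.
adult_scores = [0.55, 0.72, 0.88, 0.93, 0.97]  # people who really are adults
minor_scores = [0.10, 0.35, 0.60, 0.78]        # people who really are minors

for threshold in (0.5, 0.7, 0.9):
    restricted_adults = sum(s < threshold for s in adult_scores)  # adults defaulted to the teen experience
    missed_minors = sum(s >= threshold for s in minor_scores)     # minors treated as adults
    print(f"threshold={threshold}: restricted adults={restricted_adults}, missed minors={missed_minors}")
```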
What should developers and families do now?
- If you're a parent: review the new parental control options when they arrive, talk with your child about why you're enabling them, and adjust settings based on maturity and usage.
- If you're a developer or product maker: pay attention to design ethics. Consider how your service interacts with systems that distinguish age and prepare clear flows for verification and appeal.
- If you're a teenager: learn about the tools, what gets recorded, and how you can prove your age if you need to regain adult access.
Final reflection
It's not a perfect solution, but it's a concrete step toward bringing more protection for young users inside ChatGPT. OpenAI says it will keep learning and adjusting with help from experts and the community. If you want to read the original source, check OpenAI's official note. (openai.com)