OpenAI has shared how it plans for ChatGPT to recognize whether the person chatting is over or under 18 and to adapt the experience accordingly. Why does this matter today and not years from now? Because decisions about safety and content affect teenagers right now, in their studies and daily lives. (openai.com)
Age prediction
The core idea is to build, over the long term, a system that tries to determine whether someone is over or under 18 and automatically adjusts how ChatGPT responds. The goal is that interacting with a teen feels different from interacting with an adult. (openai.com)
This is not a perfect label. OpenAI acknowledges that even very advanced systems will sometimes get it wrong. When the system isn't confident enough or the information is incomplete, it will choose the safer option: apply the under-18 experience. Sound prudent to you? Me too. (openai.com)
If a person is identified as under 18, ChatGPT will apply age-appropriate content policies. (openai.com)
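To make the fallback concrete, here is a minimal sketch of what such a decision rule could look like. OpenAI has not published any implementation details, so everything here is an assumption for illustration: the function names, the `AgePrediction` structure, and the confidence threshold are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Experience(Enum):
    TEEN = "under_18_experience"
    ADULT = "adult_experience"


@dataclass
class AgePrediction:
    """Hypothetical output of an age-prediction model."""
    likely_adult: bool   # model's best guess: is the user 18 or older?
    confidence: float    # how sure the model is, from 0.0 to 1.0


# Assumed threshold for illustration; OpenAI has not published a real value.
CONFIDENCE_THRESHOLD = 0.9


def choose_experience(prediction: AgePrediction | None) -> Experience:
    """Default to the safer under-18 experience when in doubt."""
    # No prediction at all (incomplete information) -> safer option.
    if prediction is None:
        return Experience.TEEN
    # Low confidence -> safer option, even if the best guess says "adult".
    if prediction.confidence < CONFIDENCE_THRESHOLD:
        return Experience.TEEN
    return Experience.ADULT if prediction.likely_adult else Experience.TEEN
```

An adult misclassified this way wouldn't be locked out for good: as described below, there will be verification mechanisms to regain the full experience.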
What the under-18 experience covers
Among the explicit measures OpenAI mentions are blocking graphic sexual content and, in rare cases of acute distress, involving authorities to ensure safety. It’s a fine line between protection and privacy, and the company says it will prioritize youth safety when its principles collide. (openai.com)
OpenAI also notes that adults will have ways to prove their age and regain adult capabilities if they want. In other words, it’s not a permanent lock: there will be verification mechanisms for those who need unrestricted features. (openai.com)
Parental controls
While the prediction system matures, OpenAI proposes parental controls as the most reliable way for families to manage how ChatGPT is used at home. These controls will include linking a parent's account to a teen's account (minimum age 13) and settings that govern how the model responds to the minor. (openai.com)
The planned features are concrete and practical (a sketch of how such settings might be represented follows the list):
- Invite and link accounts by email.
- Set specific behavior rules for teenagers.
- Turn off features like memory or chat history.
- Receive notifications if the system detects an episode of acute distress; if the family cannot be reached, authorities may be involved in rare cases.
- Schedule hours when the teen can’t use ChatGPT.
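None of this implies a public API, but as a thought experiment, here is a sketch of how such a household configuration might be represented. Every name here (the `ParentalControls` structure, the quiet-hours check, the defaults) is an assumption for illustration, not OpenAI's actual design.

```python
from dataclasses import dataclass
from datetime import datetime, time


@dataclass
class ParentalControls:
    """Hypothetical settings a parent could attach to a linked teen account."""
    parent_email: str
    teen_email: str                      # linked account, minimum age 13
    memory_enabled: bool = False         # parents can turn memory off...
    chat_history_enabled: bool = False   # ...and chat history
    distress_notifications: bool = True  # alert parents on acute distress
    # Daily window when ChatGPT is unavailable, e.g. overnight.
    blackout_start: time = time(22, 0)
    blackout_end: time = time(7, 0)

    def is_blackout(self, now: datetime) -> bool:
        """True if `now` falls inside the no-access window.

        Handles windows that cross midnight (e.g. 22:00 -> 07:00).
        """
        t = now.time()
        if self.blackout_start <= self.blackout_end:
            return self.blackout_start <= t < self.blackout_end
        return t >= self.blackout_start or t < self.blackout_end


# Example: link two accounts and block access from 22:00 to 07:00.
controls = ParentalControls(
    parent_email="parent@example.com",
    teen_email="teen@example.com",
)
print(controls.is_blackout(datetime(2025, 9, 16, 23, 30)))  # True
print(controls.is_blackout(datetime(2025, 9, 16, 15, 0)))   # False
```

A real system would of course enforce this server-side, tied to account identity rather than a local clock, but the shape of the settings maps one-to-one onto the feature list above.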
OpenAI announced all of this with the intent to roll it out before the end of the month. (openai.com)
A practical example
Imagine a mom who wants her child to study with ChatGPT but without access to certain types of content. With these controls she can limit features, get alerts, and schedule no-access hours so the kid sleeps better. Isn't that what many families are looking for today? Examples like this help explain why OpenAI prioritizes family tools while it fine-tunes automatic prediction. (openai.com)
Doubts and risks worth remembering
Predicting age from language or behavior isn’t perfect. There’s a risk of false positives and negatives, and automated decisions can affect the privacy and autonomy of both young people and adults. That’s why OpenAI emphasizes it will keep consulting experts, advocates, and policymakers as it moves forward. (openai.com)
It’s also important to ask: who defines what is “safe” or “appropriate” for each age? That definition needs transparency and community input. OpenAI says it’s listening to organizations and experts as it implements these changes. (openai.com)
What's next
OpenAI published this announcement on September 16, 2025 and promises to keep sharing progress as the work continues. Meanwhile, families can expect parental controls, and the company will keep refining its approach to youth safety, privacy, and responsible access. (openai.com)
Think of this as a first stage: protection is being built, but it’s also time to ask questions, give opinions, and take part. Do you want technology to intervene when it detects a crisis, or would you prefer more human control and less automation? These are decisions that affect us all.