OpenAI activates age prediction in ChatGPT to protect teens
OpenAI has begun using age prediction in ChatGPT's consumer plans to estimate whether an account likely belongs to someone under 18. Why now? Because it wants to give teens a different experience with stronger safeguards: more opportunities, fewer risks.
What the age prediction does
The feature doesn't ask everyone to show ID. Instead, ChatGPT uses an age prediction model that analyzes account and behavior signals to estimate whether the user is under 18.
It's not a definitive label: when the system isn't sure, it defaults to a safer experience. Worried it might block you by mistake? You can always verify your age and restore full access.
Signals the model considers
The model combines several practical indicators, including:
Account age.
Typical activity times.
Usage patterns over time.
The age you declare.
OpenAI says these signals help it learn which indicators improve accuracy, and that the model will be refined over time.
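To make the "default to safer" behavior concrete, here's a minimal sketch of how such a gate could work. Everything in it is an assumption for illustration: the Signals fields, the scoring weights, and the adult_threshold are invented, since OpenAI hasn't published its model or thresholds.

```python
# Toy sketch of an age-prediction gate that defaults to the safer
# experience. All names, weights, and thresholds here are illustrative
# assumptions; OpenAI has not published its model.
from dataclasses import dataclass

@dataclass
class Signals:
    account_age_days: int        # how long the account has existed
    typical_activity_hour: int   # hour of day (0-23) with most activity
    declared_age: int | None     # age stated at signup, if any

def estimate_under_18(signals: Signals) -> float:
    """Stand-in for the real classifier: returns P(user is under 18)."""
    score = 0.5  # start out uncertain
    if signals.declared_age is not None:
        score = 0.1 if signals.declared_age >= 18 else 0.9
    if signals.account_age_days < 30:
        score += 0.05  # a young account carries less history (made-up weight)
    if 15 <= signals.typical_activity_hour <= 22:
        score += 0.05  # after-school-hours usage (made-up weight)
    return min(max(score, 0.0), 1.0)

def choose_experience(signals: Signals, adult_threshold: float = 0.2) -> str:
    """Only confidently-adult accounts keep the full experience."""
    if estimate_under_18(signals) <= adult_threshold:
        return "full"
    return "restricted"  # likely a minor OR unsure -> safer experience

new_account = Signals(account_age_days=10, typical_activity_hour=16,
                      declared_age=None)
print(choose_experience(new_account))  # "restricted": not confidently adult
```

The design point mirrors the article: uncertainty never unlocks the full experience. Only a confident adult estimate, or explicit age verification, does.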
If you're placed in the underage experience by mistake
If ChatGPT places you in the underage experience by mistake, there's a quick, simple process to confirm your age: you can regain full access with a selfie via Persona, a secure identity verification service. You can also check whether safeguards were applied to your account under Settings > Account.
Protections applied to accounts that appear to belong to minors
When the model estimates an account may belong to someone under 18, ChatGPT applies protections to reduce exposure to sensitive content, for example:
Graphic violence or gore.
Viral challenges that could encourage risky behavior.
Sexual, romantic, or violent role play.
Representations of self-harm.
Content that promotes extreme beauty standards, unhealthy diets, or body shaming.
These restrictions are grounded in academic literature on child development and in consultation with experts, because teens often differ from adults in risk perception, impulse control, and susceptibility to peer influence.
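As a rough illustration of how a restriction list like this could be enforced, here's a sketch. The category names mirror the list above, but the enum, the blocklist, and is_allowed are my assumptions, not OpenAI's actual policy engine.

```python
# Illustrative policy gate: every listed category is blocked for accounts
# treated as under 18. The enum and function names are hypothetical.
from enum import Enum, auto

class SensitiveCategory(Enum):
    GRAPHIC_VIOLENCE = auto()
    RISKY_VIRAL_CHALLENGES = auto()
    SEXUAL_ROMANTIC_OR_VIOLENT_ROLEPLAY = auto()
    SELF_HARM_DEPICTIONS = auto()
    EXTREME_BEAUTY_OR_DIET_CONTENT = auto()

# For accounts treated as minors, all listed categories are restricted.
MINOR_BLOCKLIST = frozenset(SensitiveCategory)

def is_allowed(category: SensitiveCategory, treated_as_minor: bool) -> bool:
    """Adults follow the standard policy; flagged accounts get the strict set."""
    if treated_as_minor:
        return category not in MINOR_BLOCKLIST
    return True  # the standard content policy still applies upstream

print(is_allowed(SensitiveCategory.GRAPHIC_VIOLENCE, treated_as_minor=True))
# False: blocked in the underage experience
```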
Parental controls and options for families
Beyond automatic protections, parents can customize their child's experience with parental controls that include:
Setting quiet hours when ChatGPT can't be used.
Controlling features like memory or use of data for model training.
Receiving notifications if signals of acute distress are detected.
These options aim to give families practical tools without complicating everyday use.
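To give a feel for these options, here's a hedged sketch of the parental controls as a configuration object. Every field name is invented for illustration (OpenAI hasn't published a settings schema), and the quiet-hours check is just one plausible implementation.

```python
# Hypothetical shape of the parental-control settings described above.
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    quiet_hours_start: time | None = None  # e.g. time(22, 0)
    quiet_hours_end: time | None = None    # e.g. time(7, 0)
    memory_enabled: bool = True            # allow the memory feature
    train_on_data: bool = False            # use the teen's data for training
    notify_on_acute_distress: bool = True  # alert parents on distress signals

def in_quiet_hours(settings: ParentalControls, now: time) -> bool:
    """True if `now` falls inside the quiet window (which may span midnight)."""
    start, end = settings.quiet_hours_start, settings.quiet_hours_end
    if start is None or end is None:
        return False  # no quiet hours configured
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window wraps past midnight

# Example: ChatGPT unavailable from 22:00 to 07:00.
controls = ParentalControls(quiet_hours_start=time(22, 0),
                            quiet_hours_end=time(7, 0))
assert in_quiet_hours(controls, time(23, 30))     # inside the window
assert not in_quiet_hours(controls, time(12, 0))  # midday is allowed
```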
Implementation and monitoring
OpenAI is learning from the initial rollout and will keep improving the model's accuracy. In the European Union the feature will roll out in the coming weeks, taking regional requirements into account.
The company also says it will continue talking with experts and organizations like the American Psychological Association, ConnectSafely, and the Global Physicians Network to adjust policies and safeguards.
In the end, the idea is simple: give teens a more protected environment without stopping adults from using the tool the way they expect. Will it be perfect on day one? Probably not. But it's a transparent step toward balancing accessibility and protection, with clear ways to correct mistakes.