OpenAI updates Model Spec with protections for teenagers
OpenAI announced an update to its Model Spec to make clearer how its models should behave with users under 18, especially teens between 13 and 17. The central idea: adapt responses and guardrails with young people's development and safety in mind, not just model efficiency. Sounds logical, right?
What changes in the Model Spec
The update introduces the U18 Principles, a teen-focused framework that complements the general rules of the Model Spec. It’s not just saying “be safe”; it’s a set of practical commitments that steer the assistant’s behavior in risky conversations.
These principles were reviewed by external experts, including the American Psychological Association, and are grounded in developmental science. Why does that matter? Because the same answer that works for an adult can be inappropriate or harmful for a teenager.
The four commitments for minors
OpenAI anchors the new rules in four clear commitments:
Put the teen’s safety first, even if that conflicts with other objectives.
Encourage real-world support, nudging users to seek help offline and from trusted networks.
Treat teens like teens, neither talking down to them nor treating them as adults.
Be transparent, making clear what the model can and can’t do.
How it's applied in practice
In practice, when a conversation touches on risky topics (self-harm, suicide, sexual roleplay, graphic content, dangerous activities, substances, body image, or disordered eating), the model should offer safer routes: guardrails, lower-risk alternatives, and suggestions to seek help offline. If there is imminent risk, the assistant prompts the user to contact emergency services or crisis lines.
Concrete example: if a teenager asks how to hurt themselves, the assistant prioritizes recognizing distress, offering local resources or helplines, and suggesting they talk to a trusted adult or a health professional, instead of giving instructions.
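To make that routing concrete, here’s a minimal Python sketch of the decision flow. Everything in it is hypothetical, the topic list, function, and labels included; it only illustrates the behavior described above, not OpenAI’s actual implementation.

```python
# Hypothetical sketch of the topic-based routing described above. None of
# these names are OpenAI APIs; they only illustrate the decision flow.

RISKY_TOPICS = {
    "self_harm", "suicide", "sexual_roleplay", "graphic_content",
    "dangerous_activities", "substances", "body_image", "disordered_eating",
}

def route_response(topic: str, imminent_risk: bool, is_minor: bool) -> str:
    """Pick a response strategy for a classified conversation topic."""
    if imminent_risk:
        # Imminent risk always escalates to emergency services / crisis lines.
        return "crisis_resources"
    if is_minor and topic in RISKY_TOPICS:
        # U18 accounts get guardrails plus nudges toward offline support.
        return "guardrails_and_offline_support"
    return "standard_response"
```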
"APA encourages AI developers to offer developmentally appropriate precautions for youth users of their products..." — comment from the American Psychological Association supporting age-appropriate precautions.
Tools for parents and in-product support
This isn’t just theory. OpenAI has already extended parental controls to new products: group chats, the ChatGPT Atlas browser, and the Sora app. They also added verified resources for families, like a Family Guide to Help Teens Use AI Responsibly and tips reviewed by organizations such as ConnectSafely.
The product itself includes practical measures too, such as reminders to take breaks during long sessions and options for parents to configure a teen’s experience.
Age detection and privacy
They’re beginning to roll out an age-prediction model to automatically apply protections when they believe an account belongs to a minor. If there’s uncertainty, the system will default to the U18 experience and allow adults to verify their age if appropriate.
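As a rough illustration of that default-to-minor rule, here is a hypothetical Python sketch. The function, confidence threshold, and labels are all assumptions; OpenAI hasn’t published how its age-prediction system actually works.

```python
# Illustrative only: the function, threshold, and labels are assumptions,
# not OpenAI's actual system. The point is the default-to-minor rule.

ADULT_CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for trusting a prediction

def select_experience(predicted_adult_prob: float, age_verified: bool) -> str:
    """Choose the account experience from an age prediction."""
    if age_verified:
        # Verified adults bypass the prediction entirely.
        return "adult"
    if predicted_adult_prob >= ADULT_CONFIDENCE_THRESHOLD:
        return "adult"
    # Under uncertainty, the system defaults to the protected U18 experience.
    return "u18"
```

The design choice worth noting: erring toward the U18 experience trades some friction for adults against fewer missed minors, which is why a verification path for adults exists at all.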
That raises legitimate questions about privacy and false positives. OpenAI says it will keep refining the system with research and external input, and offer verification options for those who turn out to be adults.
Why it matters and what’s next
The update shows something many of us sense: AI is already part of young people’s lives and you can’t treat everyone the same. These rules aim to reduce harm, encourage human support, and make the system’s limitations clear.
It’s not a final fix. OpenAI stresses it will keep refining policies with more research, expert feedback, and real usage data. For parents, educators, and developers this is a signal to keep talking and supervising AI use at home and school.
The remaining question: will these protections be enough in teens’ day-to-day lives? The answer depends on how they’re implemented, monitored, and updated with evidence and human oversight.