OpenAI improves ChatGPT: parental controls and reasoning


OpenAI announces a series of changes to make ChatGPT more useful in sensitive moments and for families. The company lays out a 120-day plan that aims, among other things, to better detect signs of distress, route delicate conversations to models that take more time to think, and offer parental controls for teenagers. (openai.com)

What OpenAI announced and why it matters

The official note, published on September 2, 2025, is clear: they want to accelerate improvements they've already been working on and share a 120-day timeline so people know where changes are headed. Why does this matter to you? Because it's not just new features — it's about how the tool will respond in critical moments and how families can have more control. (openai.com)

Reasoning models and routing in sensitive conversations

One of the main bets is that conversations deemed sensitive will be sent to reasoning models, such as GPT-5-thinking and o3, which spend more time processing context before replying. A real-time router decides, based on the conversation's context, whether to use a more efficient model or one that deliberates longer. The goal is safer, more useful responses in critical moments. (openai.com)

What does this mean in simple terms?

Think of a system that detects signs of distress and, instead of giving a fast, generic answer, picks a model that 'thinks' more deeply before offering guidance. It isn't magic: it's extra training plus a router that assigns resources based on what each conversation needs, which also aims to cut down on wrong answers in the face of malicious instructions or adversarial prompts. (openai.com)
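To make the routing idea concrete, here is a minimal sketch of how a sensitivity-based router might work. The keyword classifier and the model names `gpt-efficient` are illustrative assumptions (only GPT-5-thinking appears in the announcement); OpenAI has not published its actual routing logic, which would use a trained classifier rather than keywords.

```python
# Hypothetical sketch of a sensitivity-based model router.
# The classifier and the fast-model name are assumptions for
# illustration, not OpenAI's actual implementation.

FAST_MODEL = "gpt-efficient"        # assumed placeholder name
REASONING_MODEL = "gpt-5-thinking"  # reasoning model named in the announcement

DISTRESS_KEYWORDS = {"hopeless", "self-harm", "crisis", "can't go on"}

def looks_sensitive(messages: list[str]) -> bool:
    """Toy classifier: flags the conversation if any message contains
    a distress-related keyword. A real system would run a trained
    classifier over the full conversation context."""
    text = " ".join(messages).lower()
    return any(keyword in text for keyword in DISTRESS_KEYWORDS)

def route(messages: list[str]) -> str:
    """Return the model that should handle this conversation."""
    return REASONING_MODEL if looks_sensitive(messages) else FAST_MODEL

print(route(["What's the weather like?"]))           # routed to the fast model
print(route(["I feel hopeless and alone lately."]))  # routed to the reasoning model
```

The design point is that routing happens per conversation, not per account: the same user can get the efficient model for casual questions and the deliberative one the moment the context turns delicate.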

Experts, clinicians, and an evidence-driven approach

OpenAI says it will work with a Welfare Expert Council and a Global Network of Clinicians with more than 250 specialists who have contributed to health and behavior evaluations of the model. That support will be used to define wellbeing metrics, guide training, and design safeguards, especially on sensitive topics like mental health or adolescence. (openai.com)

Important: decisions still rest with the company, but they are made with clinical and academic advice to reduce risks and increase usefulness.

What's for families and teenagers

Among the new features is the launch, within a month, of parental controls. Parents will be able to link their account to a teenager's account (minimum age 13), set age-appropriate rules for model behavior, turn off features like memory or chat history, and receive notifications if the system detects the teenager is in acute distress. These protections will be on by default to help protect minors. (openai.com)

If you're a practical parent, imagine limiting the assistant's memory so it doesn't track your teen's conversation habits, or getting an alert if the system senses risk in a long chat. It doesn't replace human conversation, but it adds a technical layer meant to accompany it. (openai.com)
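The settings described above can be pictured as a small configuration object. This is a hypothetical sketch: the field names and the account-linking flow are assumptions for illustration, since OpenAI has not published a configuration schema. Only the protective defaults and the age-13 minimum come from the announcement.

```python
from dataclasses import dataclass

# Hypothetical sketch of the parental-control settings described in
# the announcement. Field names are assumptions for illustration.

@dataclass
class TeenAccountControls:
    linked_parent_account: str           # parent who accepted the link invite
    teen_age: int                        # minimum 13, per the announcement
    memory_enabled: bool = False         # protective default: off
    chat_history_enabled: bool = False   # protective default: off
    distress_notifications: bool = True  # notify parents on acute distress

    def __post_init__(self) -> None:
        if self.teen_age < 13:
            raise ValueError("Linked teen accounts require a minimum age of 13.")

controls = TeenAccountControls(
    linked_parent_account="parent@example.com",
    teen_age=15,
)
print(controls.distress_notifications)  # True: on by default
```

Note the asymmetry in the defaults: features that accumulate data (memory, history) start disabled, while the safety signal to parents starts enabled, matching the "on by default" framing in the announcement.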

What changes for users and the industry

For everyday users, the improvements promise more careful responses on delicate topics and reminders during long sessions. For developers and businesses, the real-time router and reasoning models open new possibilities: services that require more prudence can request to be handled by models designed to deliberate longer. All of this also raises legitimate questions about privacy, transparency, and how those risk signals are measured. (openai.com)

What should you keep in mind as a user?

  • If you're a parent: be ready to review and accept the invite to link accounts if you want to use the controls.
  • If you use ChatGPT for sensitive work: check when and how your service might use a different reasoning model.
  • If you're worried about privacy: ask how detections and notifications are handled, and what data is shared with parents or support teams.

Final reflection

OpenAI's bet is clear: combine clinical expertise and models that 'think' more to respond better in difficult moments, while offering tools for families. Will it be enough? We won't know until we see the implementation and real results in users' and experts' hands. Meanwhile, the announcement points in a practical direction: AI not only creates new features, it aims to do so with safety criteria and human-backed support behind it.

To read the original announcement and full specifications, you can see OpenAI's official post. (openai.com)
