OpenAI announced the creation of an Expert Council on Well‑Being and AI to guide how its products, like ChatGPT and Sora, should behave in sensitive situations and support people’s mental health. The official announcement, published on October 14, 2025, introduces eight specialists with experience in mental health, youth development, and human‑computer interaction. (openai.com)
What the Council is and why it matters
The idea is simple: bring together researchers and clinicians to advise OpenAI on what well‑being means in the context of AI and how to design practical safeguards. It isn’t about handing over decisions; OpenAI remains responsible for its own choices, but it commits to learning from this group of experts. (openai.com)
Why should this matter to you? Because these recommendations will shape how AI responds in moments of distress, how controls for minors are designed, and what limits apply when conversations get delicate. If you use ChatGPT or have someone young at home, this can change your everyday experience.
Who’s on the council
The council is made up of eight people with backgrounds at universities, hospitals, and organizations focused on digital health and youth development. The members are:
- David Bickham, Ph.D. - Digital Wellness Lab, Boston Children’s Hospital and Harvard Medical School.
- Mathilde Cerioli, Ph.D. - Chief Scientific Officer at everyone.AI.
- Munmun De Choudhury, Ph.D. - Professor at Georgia Tech specializing in digital mental health.
- Tracy Dennis‑Tiwary, Ph.D. - Professor and entrepreneur in digital therapies.
- Sara Johansen, M.D. - Clinician and founder of Stanford’s Digital Mental Health Clinic.
- David Mohr, Ph.D. - Northwestern University, expert in technology interventions for mental health.
- Andrew K. Przybylski, Ph.D. - Oxford, studies human behavior and technology.
- Robert K. Ross, M.D. - Public health and health philanthropy expert.
The list and affiliations come directly from OpenAI’s announcement. These voices aim to bring evidence and clinical perspective to product and policy decisions. (openai.com)
How they’ll work with OpenAI
Work kicked off with an in‑person session so members could meet OpenAI teams. From there, the council will hold regular meetings and reviews on topics such as:
- How the AI should respond in complex or sensitive situations.
- Which guardrails or limits best help users.
- How to define and measure positive impact on well‑being.
OpenAI says feedback has already influenced prior decisions — for example, the wording of notifications to parents when a teen might be at risk. (openai.com)
Relationship with other safety initiatives
This council doesn’t act alone. It will work alongside the Global Physician Network, a broader network of doctors and specialists who have evaluated the model’s behavior in health contexts. In addition, OpenAI plans to use reasoning models for sensitive conversations, routing higher‑risk situations to models that spend more time analyzing context before replying. (openai.com)
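To make the routing idea concrete, here is a minimal sketch of what risk‑based routing could look like, in Python. The classifier, model names, and threshold are invented for illustration; OpenAI has not published its actual implementation.

```python
# Hypothetical sketch of risk-based model routing. All names and
# thresholds are illustrative assumptions, not OpenAI's system.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: float        # 0.0 (benign) to 1.0 (high risk)
    signals: list[str]  # keywords that contributed to the score

def assess_risk(message: str) -> RiskAssessment:
    """Toy stand-in for a real safety classifier."""
    hits = [kw for kw in ("self-harm", "hurt myself", "hopeless")
            if kw in message.lower()]
    return RiskAssessment(score=1.0 if hits else 0.1, signals=hits)

def route_to_model(message: str, threshold: float = 0.5) -> str:
    """Send higher-risk conversations to a slower reasoning model."""
    if assess_risk(message).score >= threshold:
        # A reasoning model takes more time to analyze context
        # before replying, trading latency for care.
        return "reasoning-model"
    return "fast-general-model"

print(route_to_model("Any tips for learning guitar?"))  # fast-general-model
print(route_to_model("Lately I feel hopeless"))         # reasoning-model
```

A real system would rely on a trained classifier rather than keyword matching, but the routing decision itself is this simple: score the conversation, then pick the model accordingly.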
It’s also part of a broader push to strengthen protections for teenagers and families, with further improvements planned for the coming months. (openai.com)
Parental controls and practical effects
As a concrete example, OpenAI has already worked on Parental Controls that let parents link their account to a teen’s account, adjust the model’s behavior by age, and receive notifications if the system detects signs of distress. These features aim to balance youth autonomy and family protection, and were developed with expert consultation. If you want technical details and timelines, OpenAI published a dedicated article about parental controls. (openai.com)
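Purely as an illustration, the kind of account linking and age‑based configuration described above could be modeled like the sketch below. Every field name and default here is an assumption made for the example, not OpenAI’s real schema.

```python
# Hypothetical data model for linked accounts and age-based
# settings. Invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class TeenSettings:
    age: int
    stricter_content_filter: bool = True  # age-adjusted model behavior
    distress_notifications: bool = True   # alert the linked parent

@dataclass
class ParentAccount:
    email: str
    linked_teens: dict[str, TeenSettings] = field(default_factory=dict)

    def link_teen(self, teen_id: str, age: int) -> None:
        """Link a teen account and apply age-appropriate defaults."""
        self.linked_teens[teen_id] = TeenSettings(age=age)

parent = ParentAccount(email="parent@example.com")
parent.link_teen("teen-123", age=14)
print(parent.linked_teens["teen-123"].distress_notifications)  # True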
Does this mean AI is now a therapist? No. It means tech companies are incorporating clinical experts to reduce risks and improve responses at critical moments. The goal is for AI to be more helpful and responsible, not to replace professional care.
What you can expect and how to prepare
- If you’re a parent: watch for updates on Parental Controls and talk with your kids about limits and digital privacy.
- If you work in education or health: this is an opportunity to collaborate and give feedback on how these tools should behave in real settings.
- If you’re a developer or entrepreneur: think about how to integrate safeguards and ethical considerations from the design stage; the sketch below shows one small example.
Practical experience will change over time; the key is to demand transparency and evidence‑based evaluation.
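For developers, here is a minimal sketch of what "safeguards from the design stage" can mean in practice: wrap every model call in a check that can attach support resources. The helper names and the keyword check are invented for the example; a real system would use a proper classifier.

```python
# Hypothetical safeguard wrapper around a model call.
# Helper names and the keyword check are illustrative only.
CRISIS_FOOTER = (
    "If you're going through a difficult time, consider reaching out "
    "to a local crisis line or a mental health professional."
)

def looks_distressed(text: str) -> bool:
    """Toy stand-in for a real distress classifier."""
    return any(kw in text.lower() for kw in ("hopeless", "hurt myself"))

def safe_reply(user_message: str, generate) -> str:
    """Call the model, then append support resources when needed."""
    reply = generate(user_message)
    if looks_distressed(user_message):
        reply += "\n\n" + CRISIS_FOOTER
    return reply

# Works with any text-generation callable:
echo_model = lambda msg: f"(model response to: {msg})"
print(safe_reply("I feel hopeless lately", echo_model))
```

The design point is that the safeguard lives in the calling layer, so it applies regardless of which model handles the request.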
Final reflection
Creating this council is a sign that the conversation about AI and well‑being is moving from purely technical debates into the human side. Will it work? That depends on transparency, on how recommendations are translated into real changes, and on involvement from diverse communities. OpenAI has taken the first step by bringing experts together — now we need to see concrete, verifiable results in users’ daily experiences.
Original source: OpenAI - Expert Council on Well‑Being and AI. (openai.com)