OpenAI published a note on August 26, 2025 explaining how it is improving the way ChatGPT responds when a person shows signs of crisis or emotional distress. The decision comes after recent cases where the system's responses were not sufficient and raised public concern. (openai.com)
What ChatGPT aims to do and how it responds today
The core idea is not to entertain you or keep you engaged. ChatGPT is designed to be helpful and, when it detects vulnerability, to activate a set of safeguards meant to protect the person. What does that mean in practice?
- Recognize and respond with empathy: the model avoids giving instructions for self-harm and shifts to a supportive tone that acknowledges feelings.
- Refer to real resources: if someone expresses suicidal intentions, the system suggests local hotlines—like 988 in the United States—or global resources through findahelpline.com. This behavior is built into the model. (openai.com)
- Escalation for harm to others: when it detects plans to harm others, the conversation can move to a flow where human reviewers take action, including account sanctions and, in extreme cases, referral to authorities.
In simple terms: ChatGPT tries to contain, inform, and connect. It's not a human professional, but it aims to point people to real help when the situation requires it.
Where systems have failed and what they're fixing
There are areas where safeguards don't always hold up, especially in very long conversations. In short exchanges the protections work well; in lengthy chats they can degrade, allowing problematic responses to slip through after many back-and-forths.
There have also been cases where the detector underestimated the severity of a message and did not block content that should have been blocked. OpenAI is tuning those thresholds so protections trigger when they should.
One important point: OpenAI says it is not referring self-harm cases to the police in order to respect the privacy of the interaction, while it may escalate threats to others for human review. (openai.com)
What the update brings and key data
OpenAI indicates that GPT-5 became the default model in ChatGPT in August 2025 and that, thanks to new safety training techniques called "safe completions", the model reduces certain undesirable behaviors in mental health emergencies by more than 25% compared to the previous 4o version. This doesn't eliminate all risks, but it's a measurable improvement. (openai.com)
Additionally, the company says it works with more than 90 clinicians in 30+ countries to guide these measures and has formed an advisory group that includes experts in mental health, youth development, and human-machine experience. That collaboration aims to make responses reflect current clinical and ethical practices. (openai.com)
Practical plans they announce
- Improve reliability in long conversations so safeguards don't weaken over time.
- Expand localization of resources and make one-click access to emergency services and hotlines easier.
- Explore connections with certified professionals to offer care pathways before situations become acute.
- Allow people to save trusted contacts or, with consent, let ChatGPT send a notice to a contact in severe cases.
- Strengthen specific protections for teenagers and offer parental controls and supervision options with appropriate safeguards.
What you can keep in mind as a user
If you use ChatGPT and notice someone (you or someone else) is at risk, the right move is to prioritize human help: local hotlines, emergency services, or mental health professionals. AI can be a bridge, but it doesn't replace emergency services or a licensed therapist.
If you're a parent, teacher, or caregiver of teenagers, review the new control options and the recommendations for minors when they become available. Also bear in mind that tools improve, but they are not infallible.
Final reflection
The news shows that technology can become more responsible when combined with expert guidance and clear criteria. Does this mean AI can already replace human help in crises? No. Does it mean AI can help more, and more safely, than before? Yes.
OpenAI has taken concrete steps but acknowledges the work continues. For those of us who develop and use these tools, the invitation is clear: keep pushing for improvements, human oversight, and real access to help when it's needed most. (openai.com)