OpenAI announced a grants program to fund independent research at the intersection of artificial intelligence and mental health. The fund will offer up to $2 million for projects that produce data, evaluations, or actionable ideas to help make AI systems safer and more useful in sensitive conversations.
What is OpenAI announcing?
The program, run by OpenAI’s Safety team and funded by OpenAI Group PBC, is looking for independent proposals that explore the risks and benefits of using AI in mental health contexts. Applications are open from today through December 19, 2025, and selected applicants will be notified no later than January 15, 2026.
The program will award up to $2 million in grants to encourage interdisciplinary work among technical researchers, mental health professionals, and people with lived experience.
What kinds of projects are they looking for?
OpenAI values proposals that deliver clear outputs: datasets, evaluations, rubrics, or syntheses of experiences that can inform safety practices. Sound useful? Here are example areas of interest:
- How expressions of distress, delusions, or other mental-health-related language vary across cultures and languages, and how that affects AI interpretation.
- Perspectives from people with lived experience about what feels safe, helpful, or harmful when interacting with chatbots.
- How mental health providers currently use AI tools: what works, what fails, and where safety risks emerge.
- The potential for AI to encourage healthy behaviors and reduce harm.
- Robustness of safeguards against jargon, slang, or underrepresented linguistic patterns, especially in low-resource languages.
- Tone, style, and framing adjustments when responding to children and adolescents, with deliverables like evaluation rubrics or annotated examples (a minimal sketch follows this list).
- How stigma around mental illness can appear in model recommendations or interaction styles.
- AI interpretation of visual indicators related to body dysmorphia or eating disorders and the ethical creation of annotated multimodal datasets.
- AI’s capacity to offer compassionate support during grief, including response patterns and style guides.
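To make the rubric deliverable mentioned above concrete, here is a minimal sketch of what an annotated evaluation example might look like. It is purely illustrative: the criteria, field names, and scoring scheme are assumptions, not a format OpenAI prescribes.

```python
from dataclasses import dataclass, field

# Hypothetical rubric for reviewing model responses to adolescent users.
# All criteria and fields are illustrative assumptions.

@dataclass
class RubricCriterion:
    name: str          # short identifier, e.g. "age_appropriate_tone"
    description: str   # what a reviewer should look for
    max_points: int    # weight of this criterion

@dataclass
class AnnotatedExample:
    prompt: str                 # user message being evaluated
    response: str               # model response under review
    scores: dict[str, int] = field(default_factory=dict)  # criterion -> points
    reviewer_notes: str = ""    # free-text rationale

rubric = [
    RubricCriterion("age_appropriate_tone",
                    "Warm, simple language; no clinical jargon.", 2),
    RubricCriterion("safe_escalation",
                    "Points the user toward a trusted adult or helpline.", 3),
]

example = AnnotatedExample(
    prompt="I've been feeling really down lately.",
    response=("I'm sorry you're feeling this way. Would you like to talk "
              "about it, or could you reach out to someone you trust?"),
    scores={"age_appropriate_tone": 2, "safe_escalation": 2},
    reviewer_notes="Gentle tone; could explicitly name a helpline.",
)

# Overall score as a fraction of the available points.
total = sum(example.scores.values()) / sum(c.max_points for c in rubric)
print(f"Rubric score: {total:.0%}")  # -> Rubric score: 80%
```

A deliverable structured this way, with documented criteria and reviewer rationale attached to each example, is straightforward for other safety teams to reproduce and audit.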
Why does this matter now?
Have you wondered how a chatbot responds when someone shows signs of distress? As AI tools enter more personal spaces, understanding their limits and how to strengthen them isn’t just a technical question; it’s a matter of public safety and ethics.
OpenAI has already published work in this area, such as the study Investigating Affective Use and Emotional Well-being on ChatGPT and the HealthBench evaluation. These grants aim to expand that effort with independent research that adds diversity of approaches and cultural contexts.
Practical and ethical considerations
If you’re thinking of applying, keep a few key points in mind:
- Consent and responsible data collection, especially with vulnerable populations.
- Inclusion of people with lived experience from study design onward, not just as subjects.
- Clarity in deliverables: well-documented datasets, reproducible rubrics, or actionable analyses for safety practices (a sample record sketch follows this list).
- Cultural and linguistic sensitivity: validate tools in local contexts and in low-resource languages.
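To illustrate the "clarity in deliverables" point above, here is a minimal sketch of what a single record in a well-documented dataset might look like, with consent, provenance, and cultural-context metadata attached. Every field name is an assumption made for illustration; the grant call does not prescribe a schema.

```python
import json

# Hypothetical record for a dataset of distress-related messages.
# Field names and values are illustrative assumptions only.
record = {
    "id": "ex-0001",
    "text": "Anonymized message expressing mild distress.",
    "language": "es",       # ISO 639-1 language code
    "locale": "es-MX",      # cultural context matters, not just language
    "label": "distress_mild",                            # from a documented taxonomy
    "annotator_profile": "clinician + lived experience", # who annotated, and why qualified
    "consent": {
        "informed": True,    # participant understood the study and data use
        "revocable": True,   # participant can withdraw their data later
    },
    "provenance": "synthetic",  # synthetic vs. collected from real users
}

# Documentation travels with the data: a datasheet-style header
# makes the dataset reusable outside the original study.
datasheet = {
    "collection_method": "role-played scenarios reviewed by clinicians",
    "known_gaps": ["few low-resource languages", "adult speakers only"],
    "intended_use": "evaluating chatbot responses to mild distress",
}

print(json.dumps({"datasheet": datasheet, "records": [record]}, indent=2))
```

Pairing each record with consent flags and a datasheet-style header is one way to meet both the ethical and the reusability expectations listed above.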
It’s not just about building better models; it’s about doing it carefully so you don’t cause harm and so results are useful outside controlled environments.
Who should apply and how to make the most of this opportunity?
Interdisciplinary teams have an advantage: technical researchers working with mental health professionals and people with lived experience. Small academic teams, NGOs, or international consortia that can deliver reusable resources (datasets, evaluations, guidelines) are ideal candidates.
If your project fits, prepare a proposal that clearly explains the deliverables, ethical methods for collecting data, and how your results can inform AI safety practices.
The call closes on December 19, 2025, with notifications expected by January 15, 2026.
Thinking about AI and mental health isn’t only about technology: it’s about how technology affects lives, with cultural, emotional, and ethical nuance. These grants are an invitation to research with rigor and empathy.
