OpenAI launches Safety Fellowship for researchers
Today OpenAI opens applications for the OpenAI Safety Fellowship, a program aimed at researchers, engineers, and practitioners who want to do rigorous, high-impact research on the safety and alignment of advanced AI systems.
Are you interested in researching problems that matter today and tomorrow? The program combines mentorship, resources, and a peer cohort to accelerate concrete results.
What the program offers
Duration: September 14, 2026 to February 5, 2027.
Financial support: monthly stipend and compute support.
Resources: API credits and other appropriate resources; the program does not include access to OpenAI's internal systems.
Mentors: close work with OpenAI mentors and collaboration within the cohort.
Physical space: workspace available in Berkeley alongside other Constellation fellows, with a remote work option.
Expected deliverables: a substantial research product at the end (for example, a paper, benchmark, or dataset).
Who they are looking for
They are looking for people who can research and execute, not necessarily those with a specific set of credentials. Thematic priorities include, among others:
security evaluation
ethics and governance
robustness and scalable mitigations
privacy-preserving safety methods
agentic oversight
high-severity misuse domains
Profiles in computer science, social sciences, cybersecurity, privacy, HCI, and related areas are valued. References are also required.
How to apply and deadlines
Applications are open now and close on May 3. Applications will be reviewed and selected candidates notified by July 25.
More details about eligibility, compensation, and benefits are available in the application form, which is also where you start your application.
For questions about the application process write to openaifellows@constellation.org.
What really matters if you apply
Do you have an empirically grounded, technically solid idea? That's what they're looking for: not just theory, but work with evidence, clear metrics, and relevance to the research community.
Think about projects that, by the end of the fellowship, leave something reusable: a new benchmark, a well-documented dataset, or a paper with reproducible evaluations. For example, a benchmark others can run on a standard laptop, or a dataset organized so teams can pick it up and extend it easily.
If you work on model robustness, privacy, or human evaluation of risks, your proposal might be a good fit.
Remember: you’ll get support and visibility, but not internal access to OpenAI systems. Still, the combination of mentorship, compute, and a focused cohort can significantly accelerate your project.
This is an opportunity to bring safety research closer to practical problems and diverse audiences. Do you have a research question that could improve how we build and govern AI? Maybe this fellowship is the push you need.