OpenAI published the philosophy behind the Sora feed, their new space to discover and create with AI. What do they want to achieve, and how does it affect you as a user or creator? In short: prioritize creativity, give people more control, and balance freedom with safety.
Key principles
The central idea is simple and direct: help people learn what's possible and motivate them to create. Sound familiar? OpenAI explains that the feed's ranking is designed to favor creativity and active participation, not passive scrolling. Is that accidental? Not at all: they want the product to be a tool that inspires rather than just entertains.
Among the more concrete principles are:
- Optimize for creativity, favoring content that invites participation.
- Put users in control through a directional ranking you can adjust based on your mood.
- Prioritize connection between people: the feed favors content tied to communities and creators, plus cameo-style flows that encourage interaction.
- Balance safety with freedom of expression, with proactive guardrails and room for creativity.
How ranking and personalization work
The feed is personalized using a variety of signals. Which signals? Your activity on Sora (posts, accounts you follow, likes, comments, and remixes), your interactions with content, author signals like follower count, and, if you choose to enable it, your ChatGPT history. They also consider the general location your device connects from to improve relevance. All of this helps predict what content might inspire you or resonate with other creators.
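OpenAI hasn't published the actual formula, but to make the idea concrete, here is a minimal sketch of how signals like these might combine into a single ranking score. Every field name and weight below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PostSignals:
    """Toy stand-ins for the signals described above (all names hypothetical)."""
    from_followed_account: bool  # you follow the author
    likes: int                   # passive engagement
    remixes: int                 # active participation
    author_followers: int        # author signal mentioned in the post
    chatgpt_affinity: float      # 0..1, only meaningful if history is linked
    locale_match: bool           # rough device-location relevance

def rank_score(p: PostSignals, use_chatgpt_history: bool = True) -> float:
    """Weighted sum of the signals; weights are made up for illustration.

    Remixes weigh more than likes here, echoing the stated goal of
    favoring participation over passive scrolling.
    """
    s = 2.0 * p.from_followed_account
    s += 0.1 * p.likes + 0.5 * p.remixes
    s += 0.2 * min(p.author_followers, 10_000) / 10_000  # cap the author signal
    if use_chatgpt_history:
        s += 1.5 * p.chatgpt_affinity
    if p.locale_match:
        s += 0.3
    return s
```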
Important: you can turn off the connection to your ChatGPT history from the Data Controls in settings. There are also parental controls that limit personalization and continuous browsing for teens.
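As a sketch of how those two toggles could gate personalization (none of these names come from OpenAI):

```python
from dataclasses import dataclass

@dataclass
class PersonalizationSettings:
    """Hypothetical shape of the controls the post describes."""
    use_chatgpt_history: bool = True  # the Data Controls toggle
    teen_account: bool = False        # set via parental controls

    def allows_chatgpt_signal(self) -> bool:
        # Teen accounts get limited personalization regardless of the toggle.
        return self.use_chatgpt_history and not self.teen_account
```

A ranker like the sketch above would then receive `settings.allows_chatgpt_signal()` as its `use_chatgpt_history` argument.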
Safety and moderation: guardrails from the moment of creation
OpenAI emphasizes that the first line of defense is at the moment of creation. Since all content is generated within Sora, restrictions can be applied from the start to prevent inappropriate sexual content, graphic violence involving real people, extremist propaganda, and messages that promote self-harm or dangerous behavior. They also filter content potentially harmful to minors and prioritize blocking whatever can cause the most harm.
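The advantage of generating everything in-product is that the check can run before the content even exists. A minimal sketch of such a gate, with hypothetical category labels drawn from the list above:

```python
# Categories the post says are restricted at generation time (labels invented).
BLOCKED_CATEGORIES = {
    "inappropriate_sexual_content",
    "graphic_violence_real_people",
    "extremist_propaganda",
    "self_harm_or_dangerous_behavior",
}

def may_generate(prompt_categories: set[str]) -> bool:
    """Return True if generation may proceed.

    Assumes an upstream classifier has already mapped the prompt to
    policy categories; both the classifier and this gate are sketches,
    not OpenAI's implementation.
    """
    return not (prompt_categories & BLOCKED_CATEGORIES)
```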
Behind the feed there's a combination of automated tools and human review. Automated systems scan content against OpenAI's Usage Policies and are continuously updated, while a human team handles reports and reviews complex cases. If you see something you believe violates the rules, you can report it directly.
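The split between automation and human review suggests a triage step. Here's one hedged way to picture it; the thresholds and two-tier routing are assumptions, not something OpenAI has described:

```python
from enum import Enum, auto

class Verdict(Enum):
    PUBLISH = auto()
    REMOVE = auto()
    HUMAN_REVIEW = auto()

def triage(policy_risk: float, was_reported: bool,
           remove_at: float = 0.9, review_at: float = 0.6) -> Verdict:
    """Route content by an automated risk score in [0, 1].

    High-confidence violations are removed automatically, ambiguous
    cases and user reports go to the human team, the rest is published.
    """
    if policy_risk >= remove_at:
        return Verdict.REMOVE
    if was_reported or policy_risk >= review_at:
        return Verdict.HUMAN_REVIEW
    return Verdict.PUBLISH
```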
What this means for creators and users
Are you a creator? This pushes you to think about content that invites interaction and remixability. Are you a casual user? You'll get a feed that aims to surprise you with ideas and connections, not just hook you endlessly.
Also, the option to adjust the ranking gives you an uncommon tool: you don't just rely on the algorithm, you can explicitly tell it what you want to see. In practice, that can mean more control over your time and over the quality of the content you consume.
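The post calls the ranking "directional" without saying how an instruction is applied, so treat this as one plausible mechanism: explicit boosts and mutes layered on top of a base score, with all names invented:

```python
def steer(base_score: float, post_topics: set[str],
          boost: set[str], mute: set[str]) -> float:
    """Adjust a post's base ranking score by explicit user preferences."""
    if post_topics & mute:
        return base_score * 0.2  # demote what you asked to see less of
    if post_topics & boost:
        return base_score * 1.5  # promote what you asked to see more of
    return base_score
```

Telling the feed, say, "more hand-drawn animation, less celebrity content" would then populate the `boost` and `mute` sets.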
One more step in the conversation about AI and responsibility
The post comes with a candid statement: they won't get everything right on day one. Recommendation systems evolve with use and feedback, and OpenAI asks for collaboration to fine-tune details in favor of safety and creativity.
If you want to read the original source or review the policies behind these decisions, check OpenAI's post and their Usage Policies to better understand the limits and how to report content.
Final reflection
In short, Sora tries to be a feed designed for creating and connecting, with controls so you decide how much the algorithm intervenes. Does the promise sound idealistic? Maybe, but there's a clear difference: it was designed with rules from the start and with options for the user. Isn't it better to have control and guardrails from day one than to bolt them on later?