Sora 2: OpenAI launches video generation with built-in safety
Sora 2 and the Sora app arrive with a clear promise: state-of-the-art AI video with safety designed in from the start. Sounds good, right? Yes, but what matters most is that the company is proposing concrete measures so creativity doesn't come at the expense of people and their rights.
What Sora 2 and the Sora app are
Sora 2 is the model behind new video-generation capabilities, and the Sora app is the interface where people create and share. It’s not just about more realism and movement: it’s a bet on tools that let you collaborate and produce audiovisual content with controls built in.
Why does this change anything for you? Because now it’s not only about quality, but about responsibility baked into the product from the design phase.
Provenance signals and watermarking
One of OpenAI’s first lines of defense is that every video generated with Sora carries provenance signals, both visible and invisible, including metadata in the C2PA standard and internal digital watermarks.
OpenAI also maintains internal reverse-search tools (image and audio) that help trace videos back to Sora with high accuracy, and many results carry dynamic watermarks that show the creator’s name. In short: the system aims for verifiable origin, not a black box.
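To see what that verification can look like in practice, here is a minimal sketch that inspects a downloaded file’s C2PA metadata with the open-source c2patool CLI from the Content Authenticity Initiative. The file name is made up, and this shows a third-party check on an already-shared clip, not OpenAI’s internal tooling.

```python
# Sketch: inspecting C2PA provenance metadata on a downloaded video.
# Assumes the open-source `c2patool` CLI is installed and on the PATH;
# the file path below is hypothetical.
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the C2PA manifest store for a media file, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],   # default invocation prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None           # no manifest found, or the file type isn't supported
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("downloaded_sora_clip.mp4")
if manifest is None:
    print("No C2PA provenance data found.")
else:
    # The manifest identifies the generator and lists assertions
    # about how the asset was created.
    print(json.dumps(manifest, indent=2))
```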
Photos-to-video with people: consent and limits
Sora lets you convert photos of family and friends into videos, but with strict requirements. Before you upload an image with people, you must declare that you have the consent of those pictured and the rights to use that material.
Generations that include people are subject to stricter guardrails than other features (even more so when children or people who appear to be minors are involved). And when shared, those videos always carry a watermark. OpenAI stresses that protection and consent come first.
Characters and control of your likeness
To give you control over your appearance and voice, there’s a characters feature (Sora Characters, formerly called cameo). A character captures your visual and vocal likeness, but only you decide who can use it.
You can revoke access at any time. OpenAI blocks representations of public figures unless they’re used through the characters system. Also, any video that includes your character is visible to you, even if someone else created it, so you can review, delete, or report it quickly.
There are options to apply an even stricter set of guardrails: limit major appearance changes, prevent humiliating scenes, and keep identity consistent. It’s a control layer designed to protect dignity and reputation.
Protections for teenagers
Sora includes additional protections for younger users. The feed is designed to be appropriate for all ages, and potentially harmful or inappropriate content is filtered out on teen accounts.
Adults cannot initiate messages to teen profiles, and there are parental controls (via ChatGPT) to manage whether kids can send or receive messages and to enable a non-personalized feed. By default, there are also limits on continuous browsing time for teen accounts.
Content filtering and guardrails in generation
To prevent harm before it happens, Sora applies guardrails at creation time: it reviews the prompt and the output, sampling multiple video frames and the audio transcript, to block sexual material, violent propaganda, self-harm, and other dangerous content.
OpenAI ran red-teaming exercises to probe for risks and tightened policies beyond what it applies to image generation, given the higher realism and the presence of motion and sound. Beyond automated filtering, there’s human review focused on the highest-impact risks.
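To make the idea of checking multiple surfaces concrete, here is a minimal sketch that runs both a prompt and an audio transcript through OpenAI’s public Moderation API before a hypothetical generation step. It illustrates the concept only, assuming the omni-moderation-latest model; it is not Sora’s actual guardrail pipeline, and the example texts are invented.

```python
# Sketch: moderating both a prompt and an audio transcript before generation.
# Uses OpenAI's public Moderation API; this is a conceptual illustration,
# not Sora's internal guardrail system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_allowed(text: str) -> bool:
    """Return False if the moderation model flags the text."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return not response.results[0].flagged

prompt = "A family picnic in the park, golden hour, handheld camera"
transcript = "Kids laughing, a dog barking in the distance"

if is_allowed(prompt) and is_allowed(transcript):
    print("Both the prompt and the transcript passed the checks.")
else:
    print("Generation would be blocked before any video is produced.")
```

The same kind of check can be repeated on frames sampled from the finished video, which is roughly what reviewing "prompts and outputs across multiple frames" describes.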
Audio and copyright
Audio complicates the equation, but Sora includes automatic scans of transcripts to detect policy violations. It also blocks attempts to generate music that imitates living artists or existing works. If a creator believes their work was infringed, OpenAI accepts takedown requests and takes action.
User control and avenues for recourse
You decide when and how to share your videos. You can delete your published content and always have the option to report videos, profiles, private messages, comments, or characters for abuse.
Blocking accounts prevents others from seeing your profile, using your character, or contacting you. In short: there are clear tools to manage exposure and respond to violations.
Sora doesn’t promise perfection, but it does promise a design where technical capability and social and legal measures go hand in hand. Is that enough for you? That depends on how the implementation evolves and on continuous vigilance from the community and regulators.