OpenAI details Sora’s safety stack with C2PA, consent controls, and teen protections
Original: Creating with Sora safely
What happened
In a March 23, 2026 safety post, OpenAI explained how the Sora 2 model and the Sora app are governed in production. The company's message is that video generation is no longer just a model-quality problem: it is also a provenance, consent, moderation, and youth-safety problem. Accordingly, the post lays out the control stack OpenAI says now surrounds Sora from creation through sharing.
The most concrete measure is provenance. OpenAI says every Sora video includes both visible and invisible signals, and that all outputs embed C2PA metadata. The company also says it maintains internal reverse-image and audio search tools that can trace videos back to Sora with high accuracy. Many outputs additionally carry visible moving watermarks that include the creator’s name. Taken together, those controls are meant to make synthetic video easier to identify even after clips leave the app.
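OpenAI does not publish its provenance tooling, but the C2PA side is inspectable: for MP4 video, the C2PA specification carries the manifest in a top-level ISO BMFF `uuid` box. As an illustration only (not OpenAI's code, and a real verifier should use the official c2pa SDK, which also validates the cryptographic signatures), here is a minimal stdlib sketch that walks top-level boxes and extracts `uuid` payloads:

```python
import struct

def iter_boxes(data: bytes):
    """Yield (box_type, payload) for each top-level ISO BMFF box."""
    pos = 0
    while pos + 8 <= len(data):
        size, = struct.unpack(">I", data[pos:pos + 4])
        box_type = data[pos + 4:pos + 8].decode("ascii", errors="replace")
        if size == 1:
            # 64-bit "largesize" follows the type field.
            size, = struct.unpack(">Q", data[pos + 8:pos + 16])
            payload = data[pos + 16:pos + size]
        elif size == 0:
            # Box extends to end of file.
            payload = data[pos + 8:]
            size = len(data) - pos
        else:
            payload = data[pos + 8:pos + size]
        if size < 8:  # malformed box; stop rather than loop forever
            break
        yield box_type, payload
        pos += size

def find_uuid_boxes(data: bytes) -> list[bytes]:
    """Return payloads of top-level 'uuid' boxes (where C2PA manifests
    are embedded in MP4; the first 16 payload bytes are the box UUID)."""
    return [payload for t, payload in iter_boxes(data) if t == "uuid"]
```

This only locates the embedded manifest; checking that the manifest is genuine and untampered requires signature validation against the C2PA trust list, which is what makes the provenance signal hard to strip silently.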
Consent and age controls
OpenAI also described stricter handling for videos that involve real people. Users can generate image-to-video content from photos of family and friends only after attesting that they have consent and the rights to upload the media. Content involving minors or young-looking people faces even tighter guardrails, and shared outputs in those cases are always watermarked. The Characters feature is positioned as a consent-based way to manage a person's appearance and voice likeness, with revocable access and owner visibility into videos that use the character.
On the youth side, OpenAI says teen accounts receive stronger protections around mature output, that adult users cannot initiate messages with teens, and that parental controls in ChatGPT can manage direct messages and a non-personalized feed in Sora. Teen users also get limits on continuous scrolling. For harmful content, OpenAI says prompts and outputs are checked across video frames and audio transcripts, with specific blocking for sexual material, terrorist propaganda, and self-harm promotion. Audio safeguards also aim to prevent imitation of living artists or existing works.
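The multi-surface check OpenAI describes, screening both video frames and the audio transcript, can be sketched as a simple gate. Everything below is hypothetical: the blocklist and per-frame labels stand in for real moderation classifiers, which OpenAI has not published.

```python
from dataclasses import dataclass

# Hypothetical label set standing in for real moderation classifiers.
BLOCKED_LABELS = {"sexual-content", "terror-propaganda", "self-harm-promotion"}

@dataclass
class ModerationResult:
    allowed: bool
    reasons: list

def moderate(frame_labels: list[list[str]], transcript: str) -> ModerationResult:
    """Gate a clip on both surfaces: per-frame visual labels and the
    audio transcript. A hit on either surface blocks the clip."""
    reasons = []
    # Visual surface: a flagged label on any sampled frame blocks the clip.
    for i, labels in enumerate(frame_labels):
        hits = BLOCKED_LABELS & set(labels)
        if hits:
            reasons.append(f"frame {i}: {sorted(hits)}")
    # Audio surface: scan transcript tokens against the same blocklist.
    hits = BLOCKED_LABELS & set(transcript.lower().split())
    if hits:
        reasons.append(f"transcript: {sorted(hits)}")
    return ModerationResult(allowed=not reasons, reasons=reasons)
```

The design point the sketch illustrates is that neither surface alone is sufficient: a clip with innocuous frames can still carry blocked audio, and vice versa, so both checks must pass independently.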
Why it matters next
The broader significance is that frontier video systems are now shipping with a more complete governance layer, not just a content policy. The real test is whether provenance survives reposting, whether consent features hold up under abuse pressure, and whether other video-AI providers adopt similar defaults instead of leaving identification and youth protections to downstream platforms.