OpenAI details Sora’s safety stack with C2PA, consent controls, and teen protections

Original: Creating with Sora safely

AI · Mar 28, 2026 · By Insights AI · 2 min read

What happened

OpenAI used a March 23, 2026 safety post to explain how the Sora 2 model and the Sora app are being governed in production. The company’s message is that video generation is no longer just a model-quality problem. It is a provenance, consent, moderation, and youth-safety problem as well. Rather than emphasizing only output quality, OpenAI laid out the control stack it says now sits around Sora from creation through sharing.

The most concrete measure is provenance. OpenAI says every Sora video includes both visible and invisible signals, and that all outputs embed C2PA metadata. The company also says it maintains internal reverse-image and audio search tools that can trace videos back to Sora with high accuracy. Many outputs additionally carry visible moving watermarks that include the creator’s name. Taken together, those controls are meant to make synthetic video easier to identify even after clips leave the app.
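OpenAI has not published Sora's verification internals, but the general C2PA pattern the post relies on — a signed manifest embedded in the media that names the generator and carries machine-readable assertions — can be sketched. The manifest below is an illustrative assumption loosely modeled on C2PA's claim/assertion layout, not Sora's actual schema; real verification checks a cryptographically signed binary manifest with a C2PA toolchain such as `c2patool`.

```python
# Minimal sketch of a C2PA-style provenance check (illustrative schema,
# NOT OpenAI's actual manifest format). Real C2PA manifests are signed
# binary structures verified cryptographically by a C2PA toolchain.

# Hypothetical manifest, loosely modeled on C2PA's claim/assertion layout.
# "trainedAlgorithmicMedia" is the IPTC digital-source-type value C2PA uses
# to mark generative-AI output.
manifest = {
    "claim_generator": "Sora 2",   # tool that produced the media
    "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created",
                               "digitalSourceType": "trainedAlgorithmicMedia"}]}},
    ],
    "signature_valid": True,  # stand-in for real cryptographic verification
}

def is_ai_generated(manifest: dict) -> bool:
    """Return True if the manifest credibly marks the media as AI-generated."""
    if not manifest.get("signature_valid"):
        return False  # an unverifiable manifest proves nothing
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion["data"].get("actions", []):
                if action.get("digitalSourceType") == "trainedAlgorithmicMedia":
                    return True
    return False

print(is_ai_generated(manifest))  # True for the sample manifest above
```

The design point the article highlights is exactly why the metadata path alone is insufficient: manifests can be stripped on reposting, which is why OpenAI pairs them with visible watermarks and internal reverse-search tooling.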

Consent and age controls

OpenAI also described stricter handling for videos that involve real people. Users can generate image-to-video content from photos of family and friends only after attesting that they have consent and the rights to upload the media. Content involving minors or young-looking people faces even tighter guardrails, and shared outputs in those cases are always watermarked. The company's Characters feature is positioned as a consent-based way to manage a person's appearance and voice likeness, with revocable access and owner visibility into videos that use that character.
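The Characters model as described — consent granted per likeness, revocable at any time, with owner visibility into every use — maps naturally onto a small access-control record. The sketch below is a hypothetical data model under those assumptions, not OpenAI's implementation; all names are invented for illustration.

```python
# Hypothetical consent record for a person's appearance/voice likeness,
# modeling the revocable-access behavior described in the article.
from dataclasses import dataclass, field

@dataclass
class CharacterLikeness:
    owner: str
    granted_to: set = field(default_factory=set)   # users allowed to generate
    uses: list = field(default_factory=list)       # owner-visible usage log

    def grant(self, user: str) -> None:
        self.granted_to.add(user)

    def revoke(self, user: str) -> None:
        self.granted_to.discard(user)  # revocation takes effect immediately

    def can_generate(self, user: str) -> bool:
        return user == self.owner or user in self.granted_to

    def record_use(self, user: str, video_id: str) -> None:
        if not self.can_generate(user):
            raise PermissionError(f"{user} has no consent for this likeness")
        self.uses.append((user, video_id))  # owner can audit every use

# Consent is explicit and revocable:
c = CharacterLikeness(owner="alice")
c.grant("bob")
c.record_use("bob", "vid-001")
c.revoke("bob")
print(c.can_generate("bob"))  # False after revocation
```

The key property is that revocation is a state change checked at generation time, so access ends the moment the owner withdraws it rather than at some later sync point.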

On the youth side, OpenAI says teen accounts receive stronger protections around mature output, that adult users cannot initiate messages with teens, and that parental controls in ChatGPT can manage direct messages and a non-personalized feed in Sora. Teen users also get limits on continuous scrolling. For harmful content, OpenAI says prompts and outputs are checked across video frames and audio transcripts, with specific blocking for sexual material, terrorist propaganda, and self-harm promotion. Audio safeguards also aim to prevent imitation of living artists or existing works.

Why it matters next

The broader significance is that frontier video systems are now shipping with a more complete governance layer, not just a content policy. The real test is whether provenance survives reposting, whether consent features hold up under abuse pressure, and whether other video-AI providers adopt similar defaults instead of leaving identification and youth protections to downstream platforms.



© 2026 Insights. All rights reserved.