OpenAI brings Sora 2 video workflows to developers with reference assets and batch rendering
Original post: "Your videos can go further now. We're introducing new Video API capabilities, powered by Sora 2:
• Custom characters and objects
• 16:9 and 9:16 exports
• Clips up to 20 seconds
• Video continuation to extend scenes
• Batch jobs for video generation"
What OpenAI posted on X
On March 12, 2026, OpenAI Developers said the Video API was gaining new capabilities powered by Sora 2: custom characters and objects, 16:9 and 9:16 exports, clips up to 20 seconds, scene continuation, and batch jobs for generation. The post matters because it framed Sora 2 as a developer workflow, not just a consumer-facing demo. OpenAI was signaling that teams could start treating video generation like a programmable media pipeline.
What the platform docs add
OpenAI’s current video generation guide says the Videos API supports creating new videos from prompts, guiding runs with image references, reusing character assets across generations, extending completed clips, editing existing videos, downloading outputs, and submitting large offline render queues through the Batch API. The same guide says sora-2 and sora-2-pro both support 16- and 20-second generations, while sora-2-pro is the option for 1080p exports in 1920x1080 and 1080x1920.
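The model split described above — both models handling 16- and 20-second generations, with 1080p reserved for sora-2-pro — can be sketched as a small request builder. The parameter names (`model`, `size`, `seconds`) and the exact valid values are assumptions drawn from the docs summary, not a confirmed SDK signature:

```python
# Sizes the guide reserves for sora-2-pro (1080p landscape and portrait).
PRO_ONLY_SIZES = {"1920x1080", "1080x1920"}

def choose_model(size: str, seconds: int) -> dict:
    """Build a request body for a hypothetical POST /v1/videos call,
    picking the cheapest model that supports the requested export size."""
    if seconds not in (16, 20):
        raise ValueError("the guide lists 16- and 20-second generations")
    model = "sora-2-pro" if size in PRO_ONLY_SIZES else "sora-2"
    return {"model": model, "size": size, "seconds": seconds}
```

A vertical 1080p export would route to sora-2-pro, while a lower-resolution draft could stay on the base model to keep render costs down.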
That set of controls is what moves Sora 2 closer to production use. Reference-guided generation and reusable character assets help maintain visual consistency across multiple shots. Clip extensions and editing reduce the need to regenerate an entire sequence when a team only needs to push a scene a little further or revise one segment.
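Because generation is asynchronous, an extend-or-edit step in practice means polling a job until it completes before chaining the next shot. The loop below takes an injected `retrieve` callable so it can be exercised without network access; in a real pipeline that callable would wrap whatever video-retrieval method the SDK exposes (not confirmed here):

```python
import time

def wait_until_done(retrieve, video_id, poll_seconds=0.0, max_polls=50):
    """Poll retrieve(video_id) until the job reports a terminal status.

    `retrieve` is any callable returning a dict with a "status" key;
    the status names here are illustrative, not a documented contract.
    """
    for _ in range(max_polls):
        job = retrieve(video_id)
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"video {video_id} still pending after {max_polls} polls")
```

Once the clip reports completion, the same identifier can be handed to an extension or edit request, which is what lets a team push a scene further without regenerating the whole sequence.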
Why it matters
For product teams, the practical change is throughput. Batch rendering turns Sora 2 from an interactive prompt toy into something that can sit behind a backlog of social clips, ads, explainers, or vertical and horizontal variants. OpenAI’s docs also describe the workflow as asynchronous, which fits render queues and webhook-based pipelines better than a synchronous request-response UX.
There is one strategic caveat. The current docs already mark the Sora 2 video generation models and Videos API for shutdown on September 24, 2026. That means the March announcement still reads as a real expansion of developer control, but it also suggests teams should treat the feature as a near-term production tool rather than a long-horizon platform commitment until OpenAI clarifies the successor path.
Sources: OpenAI Developers on X, OpenAI video generation guide.