OpenAI Expands Sora 2 Video API with Longer Clips, Continuation, and Batch Jobs

Original: "Your videos can go further now. We're introducing new Video API capabilities, powered by Sora 2:
• Custom characters and objects
• 16:9 and 9:16 exports
• Clips up to 20 seconds
• Video continuation to extend scenes
• Batch jobs for video generation"

AI · Apr 4, 2026 · By Insights AI · 1 min read · Source: X

OpenAI Developers said on X on March 12, 2026 that the Video API gained several new capabilities powered by Sora 2: custom characters and objects, 16:9 and 9:16 exports, clips up to 20 seconds, video continuation, and batch jobs. That is a meaningful expansion because it turns the API from a basic generation endpoint into something closer to a production workflow tool.

Each of the additions addresses a common bottleneck in video generation. Custom characters and objects help keep assets consistent across multiple shots. Dual aspect ratios matter because teams rarely publish in only one format; marketing, social, and product video pipelines usually need both vertical and horizontal outputs. Extending maximum clip length to 20 seconds and adding continuation also make it easier to build sequences instead of isolated fragments.
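As a concrete illustration of how these options might fit together, the sketch below assembles a request body that picks an export size from the two announced aspect ratios and enforces the new 20-second cap. The parameter names (`model`, `size`, `seconds`) follow the shape of OpenAI's Video API, but the exact accepted values and field layout here are assumptions for illustration, not a verified client call.

```python
# Hypothetical helper assembling a Sora 2 video-generation request body.
# Field names mirror OpenAI's Video API, but the accepted values shown
# here are assumptions made for illustration.

SIZES = {
    "landscape": "1280x720",  # 16:9 export
    "portrait": "720x1280",   # 9:16 export
}
MAX_SECONDS = 20  # new maximum clip length per the announcement


def build_video_request(prompt: str, orientation: str, seconds: int) -> dict:
    """Validate the options and return a request body for video generation."""
    if orientation not in SIZES:
        raise ValueError(f"orientation must be one of {sorted(SIZES)}")
    if not 1 <= seconds <= MAX_SECONDS:
        raise ValueError(f"seconds must be between 1 and {MAX_SECONDS}")
    return {
        "model": "sora-2",
        "prompt": prompt,
        "size": SIZES[orientation],
        "seconds": str(seconds),
    }


req = build_video_request("A drone shot over a coastline", "portrait", 20)
print(req["size"])  # → 720x1280
```

In a real pipeline the same validated body would be sent once per output format, which is why having both 16:9 and 9:16 behind a single parameter matters for teams publishing to multiple channels.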

The operational angle is just as important. Batch jobs let teams queue larger workloads, which is a better fit for studios, agencies, and product teams generating many variants at once. OpenAI’s Sora 2 model page positions the model as a video system with synced audio, and the official pricing page lists base sora-2 video generation at $0.10 per second for 720x1280 or 1280x720, with higher-priced sora-2-pro tiers above that. That means the new feature set expands creative control, but it also makes cost planning and render orchestration more relevant.
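To make the cost-planning point concrete, here is a back-of-the-envelope estimator using the listed base rate of $0.10 per second for sora-2 at 720x1280 or 1280x720. The helper is illustrative only, not an official pricing tool, and it ignores the higher-priced sora-2-pro tiers.

```python
# Back-of-the-envelope render-cost estimator for a batch of sora-2 clips,
# using the $0.10/second base rate quoted for 720x1280 or 1280x720 output.
# Illustrative only; not an official pricing calculator.

SORA2_BASE_RATE_USD_PER_SECOND = 0.10


def estimate_batch_cost(num_variants: int, seconds_per_clip: int,
                        rate: float = SORA2_BASE_RATE_USD_PER_SECOND) -> float:
    """Estimated total cost in USD for a batch of equal-length clips."""
    if num_variants < 1 or seconds_per_clip < 1:
        raise ValueError("need at least one variant of at least one second")
    return num_variants * seconds_per_clip * rate


# 50 variants of a maximum-length 20-second clip at the base rate:
print(f"${estimate_batch_cost(50, 20):.2f}")  # → $100.00
```

Even this crude arithmetic shows why batching changes the planning conversation: at the base rate, a batch of fifty maximum-length clips costs as much as a thousand seconds of footage, so queue sizes and clip lengths become budget inputs, not just creative choices.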

The broader takeaway is that OpenAI is pushing its video tooling toward repeatable development and content pipelines, not one-off demos. When an API supports asset consistency, continuation, multiple aspect ratios, and batch execution, it becomes easier to plug it into real production systems. The X announcement is short, but the product direction is unmistakable: Sora 2 is being packaged as infrastructure for developers, not only as a showcase model.




© 2026 Insights. All rights reserved.