Google introduced Veo 3.1 Lite as its most cost-effective video generation model, priced at less than half of Veo 3.1 Fast while matching its speed. The model is rolling out through the paid tier of the Gemini API and Google AI Studio, broadening access for higher-volume video applications.
#video-generation
Netflix’s VOID reached Reddit as an open research release aimed at removing objects from video and repairing the scene interactions those objects caused. The notable details are a CogVideoX base, a two-pass pipeline, Gemini+SAM2 mask generation, and a 40GB+ VRAM requirement.
Together AI said on April 3, 2026 that Wan 2.7 from Alibaba Cloud is now available on its platform. The accompanying product post says text-to-video is live now, with image-to-video, reference-to-video, and video edit workflows rolling out on the same API, auth, and billing surface.
OpenAI said on March 23, 2026 that Sora videos include visible and invisible provenance signals, including C2PA metadata, alongside consent controls and tighter rules for videos involving real people. The company also described teen-specific protections, content filters across video and audio, and blocks on music that imitates living artists or existing works.
The Financial Times reports that DeepSeek V4 is set to launch next week, featuring image and video generation capabilities that position it as a direct competitor to multimodal AI models from OpenAI and Google.
ByteDance's Seedance 2.0 has arrived seemingly out of nowhere, generating hyperrealistic AI videos that have Hollywood insiders deeply concerned. The model creates footage indistinguishable from real camera recordings using a single text prompt.
ByteDance's Seedance 2.0 accepts text, images, video clips, and audio simultaneously to generate up to 20-second 1080p video, drawing immediate copyright cease-and-desist letters from Disney and Paramount within days of launch.
AI video startup Runway closed a $315M round led by General Atlantic, raising its valuation to $5.3B. The company is expanding beyond video generation with its GWM-1 world model for 3D simulation.
ByteDance officially launched Seedance 2.0, its AI video generation model. Game Science's CEO called it 'the strongest video-generation model on the planet,' and ByteDance implemented strict restrictions on content depicting real people.
EPFL researchers have developed a method that essentially eliminates drift in generative video, enabling stable, high-quality videos lasting several minutes without increased computational demands. The work will be presented at ICLR 2026.