Google launches Veo 3.1 Lite as a lower-cost video model for developers

Original: Build with Veo 3.1 Lite, our most cost-effective video generation model

AI · Apr 11, 2026 · By Insights AI · 1 min read

Google announced on March 31, 2026 that Veo 3.1 Lite is now available to developers as the company’s most cost-effective video generation model. According to Google, Veo 3.1 Lite is priced at less than half the cost of Veo 3.1 Fast while delivering the same generation speed, giving developers another price-performance point within the Veo 3.1 family.

The product positioning is aimed at builders who need scale more than premium output at any cost. Google said Veo 3.1 Lite supports both Text-to-Video and Image-to-Video generation, offers landscape 16:9 and portrait 9:16 framing, and can produce 720p and 1080p output. Developers can also choose 4s, 6s, or 8s durations depending on the application and budget.
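The generation options listed above (two modes, two aspect ratios, two resolutions, three durations) can be sketched as a small request validator. This is an illustrative sketch only: the field and option names below mirror what the article lists but are hypothetical, not the official Gemini API parameter names.

```python
# Illustrative sketch of the Veo 3.1 Lite option space described in the
# article. Field names are hypothetical placeholders, not Gemini API names.
from dataclasses import dataclass

VALID_MODES = {"text-to-video", "image-to-video"}
VALID_ASPECT_RATIOS = {"16:9", "9:16"}
VALID_RESOLUTIONS = {"720p", "1080p"}
VALID_DURATIONS_S = {4, 6, 8}

@dataclass(frozen=True)
class VeoLiteRequest:
    prompt: str
    mode: str = "text-to-video"
    aspect_ratio: str = "16:9"
    resolution: str = "720p"
    duration_s: int = 8

    def __post_init__(self):
        # Reject any combination outside the options the article lists.
        if self.mode not in VALID_MODES:
            raise ValueError(f"unsupported mode: {self.mode}")
        if self.aspect_ratio not in VALID_ASPECT_RATIOS:
            raise ValueError(f"unsupported aspect ratio: {self.aspect_ratio}")
        if self.resolution not in VALID_RESOLUTIONS:
            raise ValueError(f"unsupported resolution: {self.resolution}")
        if self.duration_s not in VALID_DURATIONS_S:
            raise ValueError(f"unsupported duration: {self.duration_s}s")

# A vertical short-form clip at the cheapest duration:
req = VeoLiteRequest("a timelapse of city traffic",
                     aspect_ratio="9:16", duration_s=4)
```

Encoding the option space this way lets an application fail fast on an invalid combination before spending a paid API call.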

Availability matters as much as price. Google said the model is rolling out through the paid tier of the Gemini API and Google AI Studio, placing it in the same developer stack already used to build Gemini-based applications. The company also said it would cut Veo 3.1 Fast pricing on April 7, suggesting a broader push to lower the cost barrier for commercial video generation.

For developers, this changes the economics of shipping video features into products where volume matters. Marketing tools, social video workflows, creative prototyping, and app-integrated generation systems often care as much about predictable cost and throughput as about absolute model quality. A cheaper model with the same speed can therefore open more room for experimentation and iterative design.
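A back-of-envelope calculation shows how that pricing gap compounds at volume. The per-second prices below are hypothetical placeholders (the article gives no absolute prices, only that Lite costs less than 50% of Veo 3.1 Fast), chosen to respect that ratio.

```python
# Hypothetical $/second-of-video rates; the article states only that
# Lite is priced below 50% of Veo 3.1 Fast, not the absolute figures.
FAST_PRICE_PER_SECOND = 0.15
LITE_PRICE_PER_SECOND = 0.07  # < 50% of the Fast rate

def monthly_cost(clips_per_day: int, clip_seconds: int,
                 price_per_second: float, days: int = 30) -> float:
    """Total spend for a fixed daily volume of generated clips."""
    return clips_per_day * clip_seconds * price_per_second * days

# 1,000 eight-second clips per day, e.g. a social-video template service:
fast = monthly_cost(1000, 8, FAST_PRICE_PER_SECOND)  # 36000.0
lite = monthly_cost(1000, 8, LITE_PRICE_PER_SECOND)  # 16800.0
```

Under these assumed rates, the same workload costs less than half as much on Lite, which is the kind of margin that decides whether a high-volume video feature ships at all.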

The larger signal is that competition in generative video is shifting from headline demos toward pricing tiers, production controls, and API availability. Veo 3.1 Lite is not just a smaller model announcement. It is Google’s attempt to make video generation fit more real product budgets, which is often what determines whether an AI capability remains a demo or becomes a shipping feature.




© 2026 Insights. All rights reserved.