Google launches Veo 3.1 Lite as a lower-cost video model for developers
Original: Build with Veo 3.1 Lite, our most cost-effective video generation model
Google announced on March 31, 2026 that Veo 3.1 Lite is now available to developers as the company’s most cost-effective video generation model. Google said Veo 3.1 Lite is priced at less than 50% of Veo 3.1 Fast while delivering the same speed, giving developers another tradeoff point inside the Veo 3.1 family.
The product positioning is aimed at builders who prioritize scale over premium output. Google said Veo 3.1 Lite supports both Text-to-Video and Image-to-Video generation, offers landscape 16:9 and portrait 9:16 framing, and can produce 720p and 1080p output. Developers can also choose 4s, 6s, or 8s durations depending on the application and budget.
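The option set above is narrow enough to validate up front before spending API credits. The sketch below is illustrative only: the payload field names and the `veo-3.1-lite` model id are hypothetical placeholders, not the actual Gemini API request schema.

```python
# Options Google lists for Veo 3.1 Lite. Field names in the returned
# payload are hypothetical, not the real Gemini API schema.
MODES = {"text-to-video", "image-to-video"}
ASPECT_RATIOS = {"16:9", "9:16"}
RESOLUTIONS = {"720p", "1080p"}
DURATIONS_S = {4, 6, 8}

def build_request(prompt: str, mode: str = "text-to-video",
                  aspect_ratio: str = "16:9", resolution: str = "720p",
                  duration_s: int = 8) -> dict:
    """Reject any combination outside the documented option set."""
    if mode not in MODES:
        raise ValueError(f"unsupported mode: {mode}")
    if aspect_ratio not in ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    if resolution not in RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution}")
    if duration_s not in DURATIONS_S:
        raise ValueError(f"unsupported duration: {duration_s}s")
    return {
        "model": "veo-3.1-lite",  # hypothetical model id
        "mode": mode,
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "resolution": resolution,
        "duration_s": duration_s,
    }
```

Catching an unsupported duration or resolution locally, before the request leaves the client, keeps failed calls from counting against a paid quota.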
Availability matters as much as price. Google said the model is rolling out through the paid tier of the Gemini API and Google AI Studio, which makes it part of the same developer stack already used for Gemini-based application building. The company also said that on April 7 it would reduce pricing for Veo 3.1 Fast, suggesting a broader push to lower the cost barrier for commercial video generation.
For developers, this changes the economics of shipping video features into products where volume matters. Marketing tools, social video workflows, creative prototyping, and app-integrated generation systems often care as much about predictable cost and throughput as about absolute model quality. A cheaper model with the same speed can therefore open more room for experimentation and iterative design.
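The volume math is easy to sketch. The per-second prices below are hypothetical placeholders, not Google's published rates; the article states only that Veo 3.1 Lite costs less than 50% of Veo 3.1 Fast.

```python
# Illustrative prices only -- assumed $/second, not published rates.
FAST_PRICE_PER_S = 0.10   # assumed Veo 3.1 Fast price
LITE_PRICE_PER_S = 0.045  # <50% of Fast, per the announcement

def monthly_cost(clips_per_day: int, clip_seconds: int,
                 price_per_s: float, days: int = 30) -> float:
    """Video spend for a fixed daily generation volume."""
    return clips_per_day * clip_seconds * price_per_s * days

# 500 eight-second clips per day, 30 days:
fast = monthly_cost(500, 8, FAST_PRICE_PER_S)  # $12,000/mo
lite = monthly_cost(500, 8, LITE_PRICE_PER_S)  # $5,400/mo
print(f"Fast: ${fast:,.0f}/mo  Lite: ${lite:,.0f}/mo")
```

At that assumed volume, a product team's video budget roughly halves at identical latency, which is the kind of margin that lets a feature ship rather than stay a prototype.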
The larger signal is that competition in generative video is shifting from headline demos toward pricing tiers, production controls, and API availability. Veo 3.1 Lite is not just a smaller model announcement. It is Google’s attempt to make video generation fit more real product budgets, which is often what determines whether an AI capability remains a demo or becomes a shipping feature.
Related Articles
Together AI said on April 3, 2026 that Wan 2.7 from Alibaba Cloud is now available on its platform. The accompanying product post says text-to-video is live now, with image-to-video, reference-to-video, and video edit workflows rolling out on the same API, auth, and billing surface.
Netflix’s VOID reached Reddit as an open research release aimed at removing objects from video and repairing the interactions those objects caused in the scene. The notable details are the CogVideoX base, a two-pass pipeline, Gemini+SAM2 mask generation, and a 40GB+ VRAM requirement.
Anthropic said on April 7, 2026 that it has signed a deal with Google and Broadcom for multiple gigawatts of next-generation TPU capacity coming online from 2027. The company also said run-rate revenue has surpassed 30 billion dollars and more than 1,000 business customers are now spending over 1 million dollars annually.