Google launches Lyria 3 Pro with 3-minute, structure-aware music generation across Vertex AI and AI Studio

Original: Last month, we released Lyria 3, enabling you to create tracks with lyrics from text, image, or video prompts. Now, we're introducing Lyria 3 Pro, which expands upon our music generation model to offer additional advanced capabilities. What's really special about this upgrade is that the model now understands the architecture of music. This makes it possible to prompt for intros, verses, choruses, and bridges, and to generate songs with more complex transitions. You can also create tracks up to 3 minutes long, a big change from previous models that were limited to 30-second tracks. Use Lyria 3 Pro to build upon your existing creativity. We're excited for your beats to drop 🎶

AI · Mar 25, 2026 · By Insights AI · 2 min read

What Google posted on X

On March 25, 2026, Google said its music model Lyria 3 Pro is expanding beyond short clips into more structured composition. The X post highlighted two practical upgrades: the ability to generate tracks up to three minutes long, and a better grasp of musical architecture so users can prompt for intros, verses, choruses, and bridges rather than just a general mood.
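As a rough illustration of what "prompting for structure" could look like in practice, here is a minimal sketch of assembling a section-aware prompt. The tag style (`Intro:`, `Verse:`, and so on) and the helper itself are assumptions for illustration, not documented Lyria 3 Pro syntax.

```python
# Hypothetical helper: assembles a section-aware music prompt of the kind
# the announcement describes (intro, verse, chorus, bridge). The labeling
# convention here is assumed, not taken from any Lyria 3 Pro documentation.
def build_structured_prompt(style: str, sections: dict[str, str]) -> str:
    parts = [style] + [
        f"{name.capitalize()}: {desc}" for name, desc in sections.items()
    ]
    # Normalize each part to end with a single period, then join.
    return " ".join(p.rstrip(".") + "." for p in parts)

prompt = build_structured_prompt(
    "Lo-fi hip hop, 80 BPM",
    {
        "intro": "vinyl crackle and a lone Rhodes chord",
        "verse": "soft boom-bap drums under the Rhodes",
        "chorus": "warm bassline joins, melody doubles",
        "bridge": "drums drop out, tape-saturated pads",
    },
)
print(prompt)
```

The point is less the exact syntax than the shift it implies: instead of one mood description, a prompt can carry a per-section plan that a structure-aware model could follow across a three-minute track.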

That framing matters because it shifts the product from novelty generation toward workflow support for musicians, creators, and tool builders. Once a model can maintain song structure across longer durations, it becomes more useful for rough composition, soundtrack iteration, branded content, and prototyping inside creative software.

What Google's blog adds

The linked Google blog says Lyria 3 Pro is being brought into more professional and developer-facing surfaces. Google says the model is now in public preview on Vertex AI for businesses that need high-fidelity audio generation at scale, and that Lyria 3 Pro is also available alongside Lyria RealTime in Google AI Studio for developers building creative products.
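For teams evaluating the Vertex AI preview, a request would likely follow Vertex AI's standard publisher-model predict pattern. The sketch below only constructs the endpoint URL and payload; the model ID (`lyria-3-pro`), the request fields, and the parameter names are assumptions for illustration, since the post does not document the API schema.

```python
import json

# All identifiers below are assumed for illustration; consult the Vertex AI
# documentation for the actual Lyria 3 Pro model ID and request schema.
PROJECT_ID = "my-project"
LOCATION = "us-central1"
MODEL_ID = "lyria-3-pro"  # hypothetical model ID

# Vertex AI's usual publisher-model predict endpoint shape.
endpoint = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/"
    f"{PROJECT_ID}/locations/{LOCATION}/publishers/google/models/"
    f"{MODEL_ID}:predict"
)

# Assumed request body: a structured prompt plus a duration cap within
# the three-minute limit described in the announcement.
payload = {
    "instances": [
        {
            "prompt": (
                "Upbeat synth-pop. Intro: sparse arpeggio. "
                "Verse: driving bass. Chorus: full stereo pads. "
                "Bridge: stripped-back piano."
            )
        }
    ],
    "parameters": {"durationSeconds": 180},
}

print(endpoint)
print(json.dumps(payload, indent=2))
```

Sending this request would additionally require an OAuth access token in the `Authorization` header, as with any Vertex AI prediction call.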

Google also says Google Vids is adding Lyria 3 and Lyria 3 Pro for Workspace users who want custom music inside AI-assisted video creation, while its Music AI Sandbox continues to serve as a collaboration channel with artists, producers, and songwriters. The company presents the update as both a model improvement and a distribution expansion into the places where creators and software teams already work.

Why this matters

The significance here is not only that Google improved another generative media model. The bigger signal is that the company is treating music generation as a platform capability across enterprise APIs, developer tooling, and end-user creation products. That makes Lyria 3 Pro more relevant to product teams deciding whether AI audio belongs in business workflows, not just consumer experimentation.

There is still a difference between musically convincing demos and dependable production use. Rights, provenance, and workflow fit will remain central questions. But by extending Lyria 3 Pro into Vertex AI, AI Studio, and Google Vids at the same time, Google is clearly pushing music generation toward more operational, repeatable use cases.

Sources: Google AI X post · Google blog post


Related Articles

AI · Mar 10, 2026 · 2 min read

Google AI used X on March 6, 2026 to direct developers to Nano Banana 2, saying the model is available through the Gemini API in Google AI Studio and Vertex AI. Google’s linked post positions Nano Banana 2, or Gemini 3.1 Flash Image, as a high-quality and faster image model designed for real application workloads.

AI · Feb 19, 2026 · 2 min read

Google announced on February 18, 2026 that Lyria 3 music generation is rolling out in beta in the Gemini app. Users can create 30-second tracks from text or images, and all generated audio is marked with SynthID.


© 2026 Insights. All rights reserved.