Google AI Expands Lyria 3 Pro Access Across Gemini, APIs, Vertex AI, and Google Vids
Original post: “If you’re ready to get into the studio with Lyria 3 Pro, here’s where to access the model: — @GeminiApp for Google AI Pro/Ultra subscribers — @GoogleAIStudio and the Gemini API — @producer_ai — Vertex AI — Google Vids for Workspace customers + Google AI Pro/Ultra subscribers”
Google AI said on X on March 25, 2026 that Lyria 3 Pro can now be accessed across a wide set of Google surfaces: the Gemini app for Google AI Pro and Ultra subscribers, Google AI Studio and the Gemini API, ProducerAI, Vertex AI, and Google Vids for Workspace customers and Google AI Pro/Ultra subscribers. That breadth is the key news: rather than keeping its latest music model in a single demo surface, Google is distributing it across consumer, developer, and enterprise products at the same time.
The official Google blog says Lyria 3 Pro is designed to help users scale music production and experiment with different styles, and Google’s March 2026 Gemini Drop adds that users can now compose longer tracks, up to three minutes. Together, those details suggest the company is moving past short proof-of-concept clips and toward more usable soundtrack generation for creators, marketers, and product teams.
The access pattern matters for developers as much as for end users. Availability in Google AI Studio and the Gemini API lowers the barrier to prototyping music features inside applications, while Vertex AI gives enterprises a more formal path for controlled deployment. At the same time, distribution through Gemini and Google Vids brings the model into everyday creation tools where non-developers can test it without building anything from scratch.
In practical terms, Google is turning Lyria 3 Pro into a platform component rather than a niche feature. That does not answer every question around licensing, review workflows, or content provenance, but it does show where Google wants the model to live: inside mainstream creative tooling and developer infrastructure. The X post is concise, yet the product message is broad. Music generation is becoming part of Google’s general AI stack, not a side experiment.