Google rolls out Gemini 3.1 Flash Live across Gemini Live, Search Live, and AI Studio
Original: Gemini 3.1 Flash Live is rolling out now in Gemini Live in the @GeminiApp and @Google Search Live. Developers can start building in @GoogleAIStudio. Find out more → goo.gle/3PzM6qP
What Google is rolling out
Google DeepMind said on March 26, 2026, that Gemini 3.1 Flash Live is rolling out simultaneously across consumer and developer surfaces. In the company's X post, Google said the model is now arriving in Gemini Live inside the Gemini app and in Google Search Live, and that developers can begin building with it in Google AI Studio. The linked product post on Google's official blog describes the release as Google's highest-quality audio model for natural and reliable real-time dialogue.
That framing matters because voice systems are increasingly judged on more than recognition accuracy. For live assistants, users notice hesitation, awkward pacing, poor function calling, and the inability to stay coherent across longer conversations. Google is positioning Gemini 3.1 Flash Live as an answer to exactly those problems, emphasizing lower latency, better conversational rhythm, and stronger task completion in messy, real-world audio environments.
Benchmarks and product claims
Google says Gemini 3.1 Flash Live leads on ComplexFuncBench Audio with a score of 90.8%, a benchmark built around multi-step function calling under constraints. On Scale AI's Audio MultiChallenge, a benchmark that stresses complex instruction following and long-horizon reasoning through interruptions and hesitations, Google reports a score of 36.1% with thinking enabled. Those are the kinds of conditions that typically break voice assistants once a session moves past short, clean prompts.
The company also says the model has improved tonal understanding, which affects how well it interprets pace, pitch, frustration, and confusion in the user's voice. In Google's description, that makes the model better suited both for enterprise customer experience systems and for direct consumer products. Google adds that Gemini Live is now available in more than 200 countries, which gives the rollout a broader distribution story than a developer-only launch.
Why it matters
For developers, the significance is that Google is trying to close the gap between a demo-quality speech model and a production-grade voice interface. Better function calling, stronger reasoning under interruption, and lower latency all point to voice agents that can actually perform work rather than just answer simple spoken queries. That could matter for search, customer support, and hands-free agent workflows where the model has to stay responsive while still using tools correctly.
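To make "using tools correctly" concrete from the developer side, here is a minimal sketch of the kind of function declaration the Gemini API's function-calling interface accepts (an OpenAPI-style parameter schema). The `book_table` tool, its fields, and the constraint set are illustrative assumptions, not details from Google's post; a ComplexFuncBench-style session would chain several such calls under constraints.

```python
import json

# A hypothetical tool a voice agent might expose. The declaration follows the
# Gemini API's function-calling schema; the tool name and fields are invented
# for illustration only.
book_table = {
    "name": "book_table",
    "description": "Reserve a restaurant table under the given constraints.",
    "parameters": {
        "type": "object",
        "properties": {
            "restaurant": {"type": "string", "description": "Restaurant name."},
            "party_size": {"type": "integer", "description": "Number of guests."},
            "time": {"type": "string", "description": "Requested time, e.g. '19:30'."},
        },
        "required": ["restaurant", "party_size", "time"],
    },
}

# Tools are handed to a session as a list of function-declaration groups.
tools = [{"function_declarations": [book_table]}]

# The payload must be JSON-serializable to cross the wire.
payload = json.dumps(tools)
```

In a live audio session, the model would emit a structured call against this declaration mid-conversation (including after an interruption), and the client would execute it and stream the result back, which is exactly the loop the function-calling benchmarks probe.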
For the market more broadly, Gemini 3.1 Flash Live is another signal that audio is no longer a side feature around LLMs. It is becoming a first-class surface where reasoning quality, multimodal robustness, and global rollout matter together. Google is not just shipping another voice demo here. It is trying to normalize live, natural dialogue as a default interface across search, assistant, and developer platforms.
Related Articles
Google introduced Gemini 3.1 Flash Live on March 26, 2026 as its new real-time audio model for developers, enterprises, and consumer products. The release ties together the Gemini Live API, Gemini Enterprise for Customer Experience, Search Live, and Gemini Live around a single lower-latency voice stack.
Google AI shared practical Gemini 3.1 Flash-Lite examples, including high-volume image sorting and business automation scenarios. The thread also points developers to preview access via Gemini API, Google AI Studio, and Vertex AI.
Google has put Gemini Embedding 2 into public preview through the Gemini API and Vertex AI. The model is Google’s first natively multimodal embedding system, combining text, image, video, audio, and document inputs in one embedding space.