Google introduced Gemini 3.1 Flash Live on March 26, 2026 as its new real-time audio model for developers, enterprises, and consumer products. The release unifies the Gemini Live API, Gemini Enterprise for Customer Experience, Search Live, and Gemini Live on a single lower-latency voice stack.
Google DeepMind said on March 26, 2026 that Gemini 3.1 Flash Live is rolling out in preview via the Live API in Google AI Studio. Google’s blog says the model is designed for real-time voice and vision agents, improves tool triggering in noisy environments, and supports more than 90 languages for multimodal conversations.
Google introduced Gemini 3.1 Flash-Lite on March 3, 2026 as its fastest and lowest-cost Gemini 3 series model. The preview release targets high-volume developer workloads with lower pricing, lower latency, and stronger benchmark scores than the prior 2.5 Flash tier.
Show HN users were drawn to SentrySearch because it turns Gemini Embedding 2's native video embeddings into a practical CLI for semantic search and clip extraction.
On February 12, 2026, Google announced a major Gemini 3 Deep Think upgrade for science, research, and engineering. The new version is available in the Gemini app for Google AI Ultra subscribers and, for the first time, via early API access for researchers, engineers, and enterprises.
On February 19, 2026, Google introduced Gemini 3.1 Pro and began rolling it out across AI Studio, Gemini CLI, Antigravity, Android Studio, Vertex AI, Gemini Enterprise, the Gemini app, and NotebookLM. Google says the model reached 77.1% on ARC-AGI-2, more than doubling Gemini 3 Pro’s reasoning performance on that benchmark.
Google has introduced Gemini 3.1 Flash-Lite in preview through Google AI Studio and Vertex AI. The company is positioning it as the fastest and most cost-efficient model in the Gemini 3 family for large-scale inference jobs.
Google AI Studio promoted Gemini Embedding 2 in a March 12, 2026 X post, and Google’s March 10 blog post says the model maps text, images, video, audio, and documents into a single embedding space. Google says it is in public preview through the Gemini API and Vertex AI and is designed for multimodal retrieval and classification.
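Because text, images, video, and audio all land in one embedding space, retrieval reduces to nearest-neighbor search over vectors regardless of the source modality. A minimal sketch of that ranking step, using toy 3-dimensional vectors as stand-ins for the embeddings a model like Gemini Embedding 2 would actually return (the item names and vectors here are illustrative, not real API output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Rank corpus items by cosine similarity to the query embedding."""
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy embeddings for items of different modalities, all in one space.
corpus = {
    "video_clip":    [0.9, 0.1, 0.0],
    "product_photo": [0.1, 0.8, 0.2],
    "audio_note":    [0.0, 0.2, 0.9],
}

# A text query's embedding close to the video clip's region of the space.
print(retrieve([0.85, 0.15, 0.05], corpus, top_k=1))  # ['video_clip']
```

The same ranking works for classification by comparing an item's embedding against label embeddings; the shared space is what lets a text query match a video or image directly.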
Google AI Studio said in a March 19, 2026 post on X that its vibe coding workflow now supports multiplayer collaboration, live data connections, persistent builds, and shadcn, Framer Motion, and npm support. The update pushes AI Studio closer to a browser-based app-building environment instead of a prompt-only prototype tool.
On March 12, 2026, Google introduced Groundsource, a Gemini-powered method that converted public reports and Google Maps signals into a dataset covering more than 2.6 million historical flood events across 150 countries. Google says the resulting model can forecast urban flash floods up to 24 hours in advance and is now available in Flood Hub.
Google updated Gemini across Docs, Sheets, Slides, and Drive to generate first drafts, build spreadsheets and presentations, and surface cited answers from Drive. The company also said Gemini in Sheets reached 70.48% on SpreadsheetBench.
Google said on March 19, 2026 that Google AI Studio now offers a full-stack vibe-coding experience powered by the Antigravity coding agent and Firebase integrations. The company says Build mode can generate multiplayer apps, manage server-side logic, store secrets securely, and wire up Google Maps and authentication flows from natural-language prompts.