Why it matters: retrieval stacks are being pulled from text-only search into multimodal memory. Google AI Studio said Gemini Embedding 2 is in public preview and covers text, image, video, audio, and documents through one model path.
#embeddings
Google Cloud Tech highlighted BigQuery’s autonomous embedding generation preview on April 10, 2026, positioning it as a way to keep vector data in sync without separate ETL glue. The documentation shows automatically maintained embedding columns backed by Vertex AI models, plus a preview built-in model path inside BigQuery.
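For context, a sketch of the manual path the preview is positioned to automate: today, embedding generation inside BigQuery means creating a remote model over a Vertex AI endpoint and running explicit `ML.GENERATE_EMBEDDING` batch calls. The project, dataset, connection, and table names below are placeholders.

```python
# Sketch of the manual embedding-sync path that the autonomous
# embedding generation preview is meant to replace. All project,
# dataset, connection, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# 1. A remote model wrapping a Vertex AI embedding endpoint.
client.query("""
CREATE OR REPLACE MODEL `my-project.my_ds.embed_model`
  REMOTE WITH CONNECTION `my-project.us.vertex_conn`
  OPTIONS (ENDPOINT = 'text-embedding-005')
""").result()

# 2. An explicit batch call -- the "ETL glue" the preview removes,
#    which must be re-run whenever the source rows change.
rows = client.query("""
SELECT content, ml_generate_embedding_result AS embedding
FROM ML.GENERATE_EMBEDDING(
  MODEL `my-project.my_ds.embed_model`,
  (SELECT review AS content FROM `my-project.my_ds.reviews`),
  STRUCT(TRUE AS flatten_json_output)
)
""").result()

for row in rows:
    print(row.content[:40], len(row.embedding))
```

With the automatically maintained columns, step 2 and its re-run scheduling are what disappear.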
Show HN users were drawn to SentrySearch because it turns Gemini Embedding 2's native video embeddings into a practical CLI for semantic search and clip extraction.
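The Show HN post doesn’t document SentrySearch’s internals, but the retrieval core of any such tool reduces to nearest-neighbor search over precomputed clip vectors. A minimal generic sketch, assuming clip embeddings have already been fetched from the embedding model (all names and shapes here are illustrative, not SentrySearch’s code):

```python
# Generic clip-level semantic search: cosine similarity between a
# query embedding and precomputed clip embeddings. Illustrative only.
import numpy as np

def top_clips(query_vec, clip_vecs, clip_ids, k=3):
    """Return the k clip ids most similar to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    m = clip_vecs / np.linalg.norm(clip_vecs, axis=1, keepdims=True)
    scores = m @ q                        # cosine similarity per clip
    order = np.argsort(scores)[::-1][:k]  # best matches first
    return [(clip_ids[i], float(scores[i])) for i in order]

# Toy usage with random stand-in vectors; real ones would come from
# the embedding API, one per video clip.
rng = np.random.default_rng(0)
clips = rng.normal(size=(100, 768))
query = rng.normal(size=768)
print(top_clips(query, clips, [f"clip_{i:03d}" for i in range(100)]))
```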
Google AI Studio promoted Gemini Embedding 2 in a March 12, 2026 X post, and Google’s March 10 blog post says the model maps text, images, video, audio, and documents into a single embedding space. Google says it is in public preview through the Gemini API and Vertex AI and is designed for multimodal retrieval and classification.
Google DeepMind said on X that Gemini Embedding 2 is now in preview through the Gemini API and Vertex AI. The model is positioned as the first fully multimodal embedding model built on the Gemini architecture, aiming to unify retrieval across text, images, video, audio, and documents.
Google put Gemini Embedding 2 into public preview on March 10, 2026. The company says the model handles text, images, and mixed multimodal documents in one embedding space, and that benchmark scores improve to 68.32 on text and 53.3 on image tasks with no change in price or vector dimensions.
Google has put Gemini Embedding 2 into public preview through the Gemini API and Vertex AI. The model is Google’s first natively multimodal embedding system, combining text, image, video, audio, and document inputs in one embedding space.
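None of these posts include a code sample. For orientation, the text path with the existing `google-genai` SDK looks like the sketch below; the model id is an assumption, and the input shape for image, video, and audio content in the new model would follow the preview docs rather than this sketch.

```python
# Sketch of an embedding call via the google-genai SDK. The call
# shape matches the SDK's existing embed_content method;
# "gemini-embedding-2" is an ASSUMED preview model id.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

result = client.models.embed_content(
    model="gemini-embedding-2",  # assumption: check the model list
    contents=["a photo of a red bicycle leaning on a wall"],
)
print(len(result.embeddings[0].values))  # vector dimensionality
```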
Perplexity says its API stack now spans agent orchestration, real-time search, embeddings, and an upcoming sandbox under one platform. The update packages more of the agent runtime into Perplexity infrastructure instead of leaving developers to assemble separate providers.
Google AI Developers says Gemini Embedding 2 is now in preview via the Gemini API and Vertex AI. Google describes it as its first fully multimodal embedding model on the Gemini architecture and its most capable embedding model so far.
Perplexity announced on February 26, 2026, that `pplx-embed-v1` and `pplx-embed-context-v1` are now available in 0.6B and 4B variants. The company positions the release as retrieval-first infrastructure, with quantized embeddings and benchmark-focused performance claims.
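Perplexity’s chat API follows OpenAI-style conventions; a request to the new embedding models might look like the sketch below, but the endpoint path, payload shape, and response fields are all assumptions pending the actual API reference.

```python
# HYPOTHETICAL request sketch for pplx-embed-v1. The endpoint path
# and payload shape are unverified assumptions based on Perplexity's
# OpenAI-compatible chat API, not documented behavior.
import os
import requests

resp = requests.post(
    "https://api.perplexity.ai/embeddings",  # assumed path
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json={"model": "pplx-embed-v1",
          "input": ["retrieval-first infrastructure"]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```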