Google DeepMind said on March 26, 2026 that it is releasing research on how conversational AI might exploit emotions or manipulate people into harmful choices. The company says it built the first empirically validated toolkit to measure harmful AI manipulation, based on nine studies with more than 10,000 participants across the UK, the US, and India.
Google DeepMind said on March 26, 2026 that Gemini 3.1 Flash Live is rolling out in preview via the Live API in Google AI Studio. Google’s blog says the model is designed for real-time voice and vision agents, improves tool triggering in noisy environments, and supports more than 90 languages for multimodal conversations.
Google DeepMind and Agile Robots said on March 24, 2026 that they will combine Gemini Robotics foundation models with Agile Robots hardware to build adaptable, reasoning robots for industrial environments. The partnership starts with high-value manufacturing and automation use cases where scale and reliability matter most.
Google DeepMind said on X on March 12, 2026 that a new podcast for AlphaGo’s tenth anniversary explores how methods first sharpened in games now feed into scientific discovery. The post echoes DeepMind’s March 10 essay arguing that AlphaGo’s search, planning, and reinforcement-learning ideas now influence work in biology, mathematics, weather, and algorithms.
Google DeepMind published a new framework for evaluating progress toward AGI on March 17, 2026. The proposal tries to shift the discussion from single benchmark scores toward a structured map of human-like cognitive capabilities.
Google DeepMind said on March 17, 2026 that it has published a new cognitive-science framework for evaluating progress toward AGI and launched a Kaggle hackathon to turn that framework into practical benchmarks. The proposal defines 10 cognitive abilities, recommends comparison against human baselines, and puts $200,000 behind community-built evaluations.
Google DeepMind said on X that it is launching a Kaggle hackathon with $200,000 in prizes to build new cognitive evaluations for AI. The linked Google post says the effort is part of a broader framework for measuring AGI progress across 10 cognitive abilities rather than a single benchmark.
Google DeepMind said on X that it is expanding the AlphaFold Database with millions of AI-predicted protein complex structures in collaboration with EMBL-EBI, NVIDIA, and Seoul National University. The release pushes AlphaFold beyond single-protein structure prediction toward a broader public resource for studying how proteins interact.
Google DeepMind said on X that Gemini Embedding 2 is now in preview through the Gemini API and Vertex AI. The model is positioned as the first fully multimodal embedding model built on the Gemini architecture, aiming to unify retrieval across text, images, video, audio, and documents.
Google DeepMind said on February 11, 2026 that Gemini Deep Think is now helping tackle professional problems in mathematics, physics, and computer science under expert supervision. The company tied the claim to two new papers, a research agent called Aletheia, and examples ranging from autonomous math results to work on algorithms, optimization, economics, and cosmic-string physics.
Google DeepMind announced Gemini 3.1 Flash-Lite on X on March 3, 2026. According to Google’s official post, the model is launching in preview with low per-token pricing and a speed-focused profile for high-volume developer workloads.
Google DeepMind announced Gemini 3.1 Flash-Lite on X on March 3, 2026 (UTC), calling it the most cost-efficient Gemini 3 model. Google’s companion blog post published pricing, latency claims, benchmark references, and preview availability in AI Studio and Vertex AI.