Google DeepMind says Gemma 4 passed 10M downloads in its first week

Original post: "Gemma 4 punches above its weight, outperforming models 10x its size without the need for massive compute. With 10M+ downloads in its first week and 500M+ for the Gemma family overall, we're excited to see this level of engagement within the open research community."

LLM · Apr 9, 2026 · By Insights AI · 1 min read

On April 9, 2026, Google DeepMind said in an X post that Gemma 4 passed 10M downloads in its first week and that the Gemma family overall has crossed 500M downloads. The company described Gemma 4 as a model family that "punches above its weight," claiming performance that beats models 10x its size without requiring massive compute. In the accompanying blog post, Google calls the Gemma 4 models its most capable open models to date and says they were built for advanced reasoning and agentic workflows.

The technical positioning is broader than a single checkpoint. Google says Gemma 4 ships in four sizes: E2B, E4B, 26B MoE, and 31B Dense. The larger models are pitched as state-of-the-art for their class, while the smaller E2B and E4B variants are designed for multimodal and low-latency use on edge and mobile hardware. Google also says the family is sized to run and fine-tune across a wide hardware range, from Android devices and laptop GPUs to developer workstations and accelerators. The blog adds that the broader Gemmaverse has already grown to more than 100,000 variants.
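To make the "Android devices to accelerators" claim concrete, here is a minimal back-of-the-envelope sketch of the weight memory each of the four named sizes would need at common quantization levels. The parameter counts are inferred from the size names (E2B, E4B, 26B, 31B) and the bytes-per-parameter figures are standard rules of thumb, not numbers from Google's post; for MoE models, total parameters still have to fit in memory even though only a subset is active per token.

```python
# Rule-of-thumb weight-memory estimate for the four Gemma 4 sizes named in
# the post. Parameter counts (in billions) are inferred from the names;
# bytes-per-parameter values are standard quantization approximations.

SIZES_B = {"E2B": 2, "E4B": 4, "26B MoE": 26, "31B dense": 31}
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(params_billion: float, dtype: str) -> float:
    """Approximate GiB needed just to hold the weights (no KV cache)."""
    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 2**30

for name, n in SIZES_B.items():
    row = ", ".join(f"{d}: {weight_gb(n, d):.1f} GiB" for d in BYTES_PER_PARAM)
    print(f"{name:>9} -> {row}")
```

Under these assumptions an E2B-class model at int4 fits comfortably in phone-scale memory, while the 31B dense model at fp16 needs workstation- or accelerator-class hardware, which matches the tiered positioning described above.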

Open-model momentum, not just another release

The 10M first-week figure matters because it suggests Gemma is becoming part of the default toolchain for developers who want an open model that is small enough to deploy locally but capable enough for reasoning-heavy workloads. Combined with Google’s claims about leaderboard performance and the fast growth in community variants, the post reads less like a routine adoption milestone and more like evidence that Google has found real distribution for its open-model strategy. For the wider market, that raises the pressure on other model vendors to pair strong benchmarks with strong downstream usability.


Related Articles

LLM · Reddit · 5d ago · 2 min read

A post in r/artificial pointed readers to Google DeepMind's Gemma 4 release, which packages advanced reasoning and agentic features under Apache 2.0. Google says the family spans four sizes, supports up to 256K context in larger models, and ships with day-one ecosystem support from Hugging Face to llama.cpp.


© 2026 Insights. All rights reserved.