Google launches Gemma 4 open models with 140 languages, 400M downloads

Gemma 4 is Google’s most capable family of open models yet

By Geo News Digital Desk

Google DeepMind released Gemma 4 on Wednesday, April 1.

The company describes it as its most intelligent open model yet, designed for advanced reasoning and agentic workflows and released under the permissive Apache 2.0 license.

Google introduced four sizes: an effective-2B (E2B) model, an effective-4B (E4B) model, a 26B Mixture of Experts (MoE) model, and a 31B dense model.

At launch, the 31B model ranks as the third-best open model globally on the Arena AI text leaderboard.

Moreover, Google reports that the 26B model holds the sixth spot, outperforming models 20 times its size.

In the official blog post, the VP of Research at Google DeepMind wrote: “Gemma 4 delivers an unprecedented level of intelligence-per-parameter.”

Since the first Gemma model was released, the models have been downloaded over 400 million times, creating a "Gemmaverse" of over 100,000 variants.

The new models support native function calls, structured JSON output, and system commands, allowing the creation of autonomous agents that can interact with tools and APIs.
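To illustrate what native function calling involves, here is a minimal Python sketch of the pattern: the developer declares a tool schema, and the model replies with structured JSON naming the tool and its arguments. The `get_weather` tool, the schema layout, and the reply format are hypothetical examples of the general approach, not the exact format Gemma 4 uses.

```python
import json

# Hypothetical tool declaration in the common JSON-schema style used
# for function calling; Gemma 4's actual expected format may differ.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def parse_tool_call(raw: str) -> tuple[str, dict]:
    """Parse a structured JSON function call emitted by the model."""
    call = json.loads(raw)
    name, args = call["name"], call["arguments"]
    # Minimal validation against the declared schema.
    for field in WEATHER_TOOL["parameters"]["required"]:
        if field not in args:
            raise ValueError(f"missing required argument: {field}")
    return name, args

# A reply the model might emit when asked about the weather.
reply = '{"name": "get_weather", "arguments": {"city": "Karachi"}}'
name, args = parse_tool_call(reply)
print(name, args["city"])  # get_weather Karachi
```

The agent then executes the named tool with those arguments and feeds the result back to the model, which is what enables the autonomous tool-and-API loops described above.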

All models support native video, image, and text processing; the E2B and E4B models support native audio input for speech recognition.

The models support more than 140 languages, and the larger models provide context windows of up to 256K tokens, letting developers process an entire code repository or a long document in a single prompt.
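For a sense of scale, the sketch below estimates whether a document fits in a 256K-token window. The four-characters-per-token ratio is a rough rule of thumb, not Gemma's actual tokenizer; a real check would count tokens with the model's own tokenizer.

```python
# Rough feasibility check for a 256K-token context window.
CONTEXT_TOKENS = 256_000
CHARS_PER_TOKEN = 4  # heuristic assumption, not Gemma's tokenizer

def fits_in_context(text: str, reserve_for_output: int = 4_000) -> bool:
    """Estimate whether `text` plus an output budget fits the window."""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens + reserve_for_output <= CONTEXT_TOKENS

doc = "x" * 1_000_000  # ~250K estimated tokens, roughly a large repo
print(fits_in_context(doc))  # True
```

Under this heuristic, roughly a million characters of source code or prose fit in one prompt, which is what makes whole-repository reasoning practical.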

The edge-focused E2B and E4B models are optimized for mobile and IoT devices and run entirely offline on phones, the Raspberry Pi, and the NVIDIA Jetson Orin Nano with low latency. Google says it worked with Qualcomm and MediaTek on mobile optimizations, in collaboration with the Pixel team.

Users can access the models on Hugging Face, Kaggle, Ollama, and Google AI Studio.