Google's AI ambitions are accelerating at breakneck speed. In 2025 and early 2026 alone, the company launched Gemma 3, its latest family of open-weight foundation models, introduced MedGemma for healthcare diagnostics, and rolled out specialized versions for coding and on-device use. Now it has released another open model, this one for translation, called TranslateGemma, which can translate between languages on devices anywhere in the world.
The search giant said it trained and evaluated the translation model on 55 language pairs for reliability and quality, covering major languages such as Spanish, French, Chinese, and Hindi, as well as many low-resource languages.
Beyond its core languages, TranslateGemma was trained on nearly 500 additional language pairs. It is designed as a strong base model that researchers can fine-tune for specific languages or improve translations for low-resource languages.
Mobile, Laptop, and Cloud
The model is built on Gemma 3 and comes in three sizes to suit different needs and compute budgets: a 4B model for mobile and edge devices, a 12B model that can run on consumer laptops for local development, and a 27B model aimed at high-accuracy use cases on cloud hardware such as a single H100 GPU or TPU.
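As a rough illustration of what running the smallest checkpoint locally might look like, here is a minimal sketch using the Hugging Face transformers library. The model ID `google/translategemma-4b-it` and the prompt format are assumptions for illustration, not confirmed names from the release; check the official Gemma documentation for the actual checkpoint identifiers and recommended prompting.

```python
def build_prompt(text: str, source: str, target: str) -> str:
    """Build a simple instruction-style translation prompt (assumed format)."""
    return f"Translate the following text from {source} to {target}:\n{text}"


if __name__ == "__main__":
    # Hypothetical usage: the 4B checkpoint is small enough for a laptop or
    # edge device, per the article. Model ID is an assumption.
    from transformers import pipeline

    translator = pipeline("text-generation", model="google/translategemma-4b-it")
    prompt = build_prompt("Good morning", "English", "Spanish")
    print(translator(prompt, max_new_tokens=64)[0]["generated_text"])
```

The helper keeps the prompt construction separate from model loading, so the same prompt format could be reused with the 12B or 27B checkpoints on larger hardware.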
What makes TranslateGemma stand out is its efficiency. The 12B model outperforms the larger Gemma 3 27B baseline while using fewer than half the parameters, delivering high-quality translations with better throughput and lower latency. Similarly, the 4B model rivals the performance of the 12B baseline, making it powerful enough for mobile devices.
Google achieved this through a two-stage training process. First, the company used supervised fine-tuning on parallel data that included both human translations and high-quality synthetic translations from Gemini models. Then, it applied reinforcement learning using reward models like MetricX-QE and AutoMQM to refine translation accuracy and naturalness.
The models also retain Gemma 3's multimodal capabilities, meaning they can translate text within images without needing separate training for that task.
Google said: "The release of TranslateGemma provides researchers and developers with powerful and adaptable tools for a wide array of translation-related tasks..."