Google Releases TranslateGemma, an Open Suite of Efficient Translation Models

A two-stage training pipeline delivers benchmark gains across 55 languages.

Overview

  • The suite ships in 4B, 12B, and 27B parameter sizes, built on Gemma 3, and is available now via Hugging Face, Kaggle, the Gemma Cookbook, and Vertex AI (a loading sketch follows this list).
  • Google reports the 12B model surpasses the Gemma 3 27B baseline on WMT24++ as measured by MetricX, with consistent gains across all sizes, corroborated by human evaluations on WMT25 language pairs.
  • Training combines supervised fine-tuning on human and high-quality synthetic parallel data with a reinforcement learning phase guided by MetricX-QE and AutoMQM reward models; this two-stage recipe is sketched schematically after the list.
  • The models retain multimodal capability, showing improved performance on the Vistra image translation benchmark for text-in-image tasks.
  • Designed for broad deployment: the 4B model targets mobile and edge use, the 12B model is intended for consumer laptops, and the 27B model is tuned for single-GPU or TPU cloud inference. Nearly 500 additional language pairs are prepared for community exploration.
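
For readers who want to try the release, the snippet below is a minimal sketch of loading a checkpoint from Hugging Face with the transformers library. The repository ID, prompt wording, and output handling are assumptions rather than confirmed details from this announcement; consult the official model card for the exact names and translation prompt format.

```python
# Minimal sketch: loading a TranslateGemma checkpoint with transformers.
# The model ID below is hypothetical -- check the official Hugging Face
# model card for the real repository name and expected prompt format.
from transformers import pipeline

MODEL_ID = "google/translategemma-4b-it"  # assumed name, may differ

pipe = pipeline("text-generation", model=MODEL_ID, device_map="auto")

# Gemma-family instruction-tuned models accept chat-style messages; the
# plain-instruction prompt here is an assumption for illustration.
messages = [{
    "role": "user",
    "content": "Translate from English to German: The weather is lovely today.",
}]

out = pipe(messages, max_new_tokens=128)
# With chat input, recent transformers versions return the full message
# list; the last entry is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```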
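
The two-stage recipe described in the overview (supervised fine-tuning on parallel data, then reinforcement learning against quality-estimation rewards) can be summarized in a schematic. Everything below is an illustrative stand-in: a toy model replaces Gemma 3 and a random scorer replaces the MetricX-QE and AutoMQM reward models, so the sketch shows the shape of the pipeline, not Google's actual implementation.

```python
# Schematic of the two-stage recipe: SFT on parallel pairs, then a
# REINFORCE-style RL phase driven by a quality-estimation reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, MAXLEN = 100, 32, 12

class TinyLM(nn.Module):
    """Toy causal LM standing in for Gemma 3."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        h, _ = self.rnn(self.emb(ids))
        return self.head(h)  # next-token logits

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# ---- Stage 1: supervised fine-tuning on (source, target) parallel pairs ----
src = torch.randint(0, VOCAB, (8, MAXLEN))  # stand-in source sentences
tgt = torch.randint(0, VOCAB, (8, MAXLEN))  # stand-in reference translations
pair = torch.cat([src, tgt], dim=1)         # "prompt + target" sequence

# Real pipelines typically mask source tokens out of the loss; omitted here.
logits = model(pair[:, :-1])
loss_sft = F.cross_entropy(logits.reshape(-1, VOCAB), pair[:, 1:].reshape(-1))
opt.zero_grad(); loss_sft.backward(); opt.step()

# ---- Stage 2: RL phase guided by a quality-estimation reward ----
def quality_reward(sample):
    # Placeholder for a learned reward model such as MetricX-QE; random
    # scores keep the sketch self-contained.
    return torch.rand(sample.size(0))

prompt = src[:, :4]  # translate from a short prompt
seq, logps = prompt, []
# Sample a "translation" token by token, keeping log-probs for REINFORCE.
for _ in range(MAXLEN):
    dist = torch.distributions.Categorical(logits=model(seq)[:, -1])
    tok = dist.sample()
    logps.append(dist.log_prob(tok))
    seq = torch.cat([seq, tok.unsqueeze(1)], dim=1)

reward = quality_reward(seq)
# Policy gradient: raise log-probs of samples the reward model scores well.
loss_rl = -(reward.detach() * torch.stack(logps, dim=1).sum(dim=1)).mean()
opt.zero_grad(); loss_rl.backward(); opt.step()
```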