Overview
- Gemma 3 270M, a 270 million-parameter model, debuted on August 14 in both pretrained and instruction-tuned versions, along with Quantization-Aware Trained (QAT) checkpoints for INT4 precision.
- Internal tests on a Pixel 9 Pro showed the INT4-quantized model consuming just 0.75 percent of the battery across 25 conversations, underscoring its on-device efficiency and privacy benefits.
- The instruction-tuned Gemma 3 270M scored 51.2 percent on the IFEval benchmark, outpacing similarly sized models, though critics noted that some third-party comparisons were omitted.
- Designed for rapid fine-tuning, the compact architecture supports fleets of specialized models for high-volume, well-defined tasks on smartphones, on Raspberry Pi boards, and in browsers (see the fine-tuning sketch after this list).
- Available via Hugging Face, Ollama, Kaggle, LM Studio, Docker, and Vertex AI under the Gemma Terms of Use, which allow broad commercial use while enforcing prohibited-use restrictions, as sketched in the loading example below.
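
To make the availability concrete, the following minimal sketch loads the instruction-tuned checkpoint through the Hugging Face transformers library and runs a single prompt. The model id "google/gemma-3-270m-it" is an assumption based on Google's naming convention; confirm it against the official model card.

```python
# Minimal sketch: load the instruction-tuned Gemma 3 270M checkpoint and
# generate a reply. The model id below is assumed, not confirmed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # assumed id for the instruction-tuned variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Instruction-tuned Gemma checkpoints ship a chat template for formatting turns.
messages = [{"role": "user", "content": "Label the sentiment: 'Great battery life.'"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For local command-line use, Ollama and LM Studio expose the same weights as downloadable builds; the exact Ollama tag (for example, `gemma3:270m`) should likewise be verified before use.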
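
The rapid fine-tuning workflow mentioned above can be sketched with the standard transformers Trainer. The base-model id and the toy routing dataset here are illustrative assumptions; a real deployment would train on a task-specific corpus.

```python
# Minimal sketch: task-specific fine-tuning of the pretrained 270M base model
# with Hugging Face transformers. Model id and toy data are assumptions.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "google/gemma-3-270m"  # assumed id for the pretrained base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tiny stand-in for a high-volume, well-defined task such as query routing.
examples = [
    {"text": "query: reset my password -> team: account"},
    {"text": "query: where is my parcel -> team: shipping"},
]
dataset = Dataset.from_list(examples).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gemma-270m-router",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because the model is small, a run like this completes quickly even on modest hardware, which is what makes maintaining a fleet of narrowly specialized variants practical.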