Particle.news

Some AI Language Models Emit Up to 50 Times More CO2 Than Others

Emissions rise sharply as models generate more reasoning tokens, a trade-off that users can manage by opting for concise responses.

Image caption: Fancier 'reasoning' chatbots are bulking up AI's energy demand.

Overview

  • Researchers at Hochschule München University of Applied Sciences evaluated 14 models ranging from 7 to 72 billion parameters on 1,000 benchmark questions, finding that reasoning-enabled systems produce up to 50 times more CO2 than concise-response models.
  • None of the models that kept emissions below 500 grams of CO2 equivalent exceeded 80% accuracy, underscoring an inherent accuracy-sustainability trade-off in current LLM technology.
  • Queries on complex subjects such as abstract algebra and philosophy triggered up to six times higher emissions than simpler topics due to extended internal reasoning processes.
  • Local energy grid mixes and underlying hardware significantly influence a model’s carbon footprint, suggesting that emission levels may vary across regions and setups.
  • Users can curb their AI carbon impact by selecting more efficient models, limiting high-capacity LLM use to essential tasks, and requesting concise outputs.
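Because emissions scale roughly linearly with the number of generated tokens, the gap between a concise answer and a reasoning-heavy one can be sketched with a back-of-envelope calculation. The energy-per-token and grid-intensity figures below are illustrative placeholders, not values from the study; only the proportional relationship is the point.

```python
# Back-of-envelope estimate of per-query CO2 emissions for an LLM response.
# The per-token energy and grid carbon intensity are assumed placeholder
# values, chosen only to illustrate how emissions scale with token count.

def query_emissions_g(tokens: int,
                      energy_per_token_wh: float = 0.002,
                      grid_intensity_g_per_kwh: float = 480.0) -> float:
    """Grams of CO2 equivalent for one response: tokens x energy x grid mix."""
    kwh = tokens * energy_per_token_wh / 1000.0
    return kwh * grid_intensity_g_per_kwh

# A concise answer (~300 tokens) vs. a reasoning-heavy one (~15,000 tokens):
concise = query_emissions_g(300)
reasoning = query_emissions_g(15_000)
print(f"concise: {concise:.2f} g, reasoning: {reasoning:.2f} g, "
      f"ratio: {reasoning / concise:.0f}x")
```

With these hypothetical inputs, a response that emits 50 times more reasoning tokens emits 50 times more CO2, which is the same order of difference the researchers report between concise and reasoning-enabled models. Changing the grid-intensity parameter also shows why the same model's footprint varies by region.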