Samsung Releases 7M-Parameter Recursive AI That Rivals Larger Models on Reasoning Benchmarks

The open-source Tiny Recursive Model (TRM) refines its answers through recursion, prioritizing efficient reasoning over brute-force scale.

Overview

  • Samsung SAIL Montreal published the Tiny Recursive Model with an arXiv paper and released full code and training details on GitHub under the MIT license.
  • TRM iteratively feeds its own outputs back in as inputs, using deep supervision and adaptive halting to refine its predictions over multiple passes (see the sketch after this list).
  • Reported scores include 87.4% on Sudoku-Extreme, 85% on Maze-Hard, 45% on ARC-AGI-1, and 8% on ARC-AGI-2.
  • Both press coverage and the authors report that the model matches or surpasses far larger LLMs, including Google’s Gemini 2.5 Pro, OpenAI’s o3-mini, and DeepSeek R1, on these structured puzzles.
  • At 7 million parameters, the system can run on commodity hardware with lower energy use, though it has not been evaluated on open-ended language or perceptual tasks.
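
For readers who want a more concrete picture of the mechanism in the second bullet, below is a minimal PyTorch sketch of the recursive-refinement idea: a small shared block repeatedly re-reads the question, its latent state, and its current answer; every step’s prediction is supervised (deep supervision); and a halting head learns when further passes are unnecessary. This is an illustrative reconstruction, not the released TRM code; the class name, layer sizes, halting threshold, and loss wiring are all assumptions made for the example.

```python
# Illustrative sketch only -- not the authors' released implementation.
# Shows three ideas from the article: (1) outputs fed back in as inputs,
# (2) deep supervision of every recursion step, (3) a learned halting signal.
import torch
import torch.nn as nn

class TinyRecursiveSketch(nn.Module):
    def __init__(self, vocab_size=10, d_model=64, max_steps=8):
        super().__init__()
        self.max_steps = max_steps
        self.embed = nn.Embedding(vocab_size, d_model)
        # One small block is shared across all recursion steps.
        self.block = nn.Sequential(
            nn.Linear(3 * d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model)
        )
        self.answer_head = nn.Linear(d_model, vocab_size)  # refined answer tokens
        self.halt_head = nn.Linear(d_model, 1)             # "stop refining" score

    def forward(self, question_tokens):
        x = self.embed(question_tokens)      # (batch, seq, d) question embedding
        z = torch.zeros_like(x)              # latent scratchpad state
        y = torch.zeros_like(x)              # embedding of the current answer guess
        per_step_logits, per_step_halt = [], []
        for _ in range(self.max_steps):
            # Question, latent state, and previous answer are fed back in together.
            z = z + self.block(torch.cat([x, z, y], dim=-1))
            logits = self.answer_head(z)
            # Re-embedding the argmax is a non-differentiable simplification
            # used here only to show the output-as-input loop.
            y = self.embed(logits.argmax(dim=-1))
            per_step_logits.append(logits)
            per_step_halt.append(torch.sigmoid(self.halt_head(z.mean(dim=1))))
            # Adaptive halting at inference: stop once the model is confident.
            if not self.training and per_step_halt[-1].mean() > 0.5:
                break
        return per_step_logits, per_step_halt

def deep_supervision_loss(per_step_logits, per_step_halt, targets):
    """Supervise every recursion step, and train the halting head to fire
    once a step's prediction already matches the target."""
    ce = nn.CrossEntropyLoss()
    loss = 0.0
    for logits, halt in zip(per_step_logits, per_step_halt):
        loss = loss + ce(logits.flatten(0, 1), targets.flatten())
        correct = (logits.argmax(dim=-1) == targets).all(dim=-1).float().unsqueeze(-1)
        loss = loss + nn.functional.binary_cross_entropy(halt, correct)
    return loss

if __name__ == "__main__":
    model = TinyRecursiveSketch()
    puzzle = torch.randint(0, 10, (2, 81))    # e.g. two flattened 9x9 Sudoku grids
    solution = torch.randint(0, 10, (2, 81))
    step_logits, step_halt = model(puzzle)
    print(deep_supervision_loss(step_logits, step_halt, solution))
```

The released code on GitHub differs in architecture and training details; the sketch only captures the output-feedback loop, deep supervision, and adaptive halting at a schematic level.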