Particle.news

Anthropic Launches Multi-Agent Code Review for Claude Code

The research-preview tool auto-analyzes GitHub pull requests with depth-focused, token-billed reviews to ease PR backlogs driven by AI-generated code.

Overview

  • Available now for Claude Code Teams and Enterprise plans, the tool is enabled by admins per repository to run automatically on pull-request open events, with spending caps and analytics to manage scope and cost.
  • Multiple specialized agents examine code changes in the context of the full codebase, verify findings to reduce false positives, rank issues by severity, and leave inline comments, while final approval remains with human reviewers.
  • Anthropic reports internal gains: substantive review comments rose from 16% to 54%, 84% of large PRs (>1,000 lines) yielded findings averaging 7.5 issues, and fewer than 1% of surfaced issues were marked incorrect.
  • Reviews are billed by token usage and typically cost $15–$25 and take about 20 minutes, reflecting a depth-first approach that is pricier but more thorough than Anthropic’s lighter open-source GitHub Action.
  • Early reaction is mixed: some users praise bug catches in internal and customer examples, while others question costs and potential effects on senior reviewer roles. Anthropic says the heavier compute targets harder-to-find bugs, and the tool offers only light security checks alongside its separate Claude Code Security product.
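
The review flow described above, specialized agents proposing findings, a verification pass to cut false positives, and severity ranking before inline comments, can be sketched in miniature. This is a hypothetical illustration, not Anthropic's implementation: the agent functions, `Finding` type, and string-matching checks below are stand-ins for what would be LLM-driven analysis.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    message: str
    severity: int  # higher = more severe

# Hypothetical specialist "agents": each inspects the diff (here, a list of
# (file, line, text) tuples) and returns candidate findings. In the real
# product these would be model calls with full-codebase context.
def security_agent(diff):
    return [Finding(f, ln, "possible unsanitized input", 3)
            for f, ln, text in diff if "eval(" in text]

def style_agent(diff):
    return [Finding(f, ln, "line exceeds 100 chars", 1)
            for f, ln, text in diff if len(text) > 100]

def verify(finding, diff):
    # Verification pass: re-check that each candidate points at a real
    # changed line, to reduce false positives before commenting.
    return any(f == finding.file and ln == finding.line for f, ln, _ in diff)

def review(diff, agents):
    candidates = [f for agent in agents for f in agent(diff)]
    confirmed = [f for f in candidates if verify(f, diff)]
    # Rank by severity so the worst issues surface first as inline comments.
    return sorted(confirmed, key=lambda f: -f.severity)

diff = [("app.py", 12, "result = eval(user_input)"),
        ("app.py", 40, "x = 1")]
for f in review(diff, [security_agent, style_agent]):
    print(f"{f.file}:{f.line} [sev {f.severity}] {f.message}")
```

Per the article, the last step in the real tool posts the ranked findings as inline PR comments, with final approval left to human reviewers.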