Claude Code Adds Multi-Agent Code Review for Team and Enterprise
Original: Introducing Code Review, a new feature for Claude Code.
On March 9, 2026, the Claude account on X announced that Claude Code now includes Code Review, a new feature that dispatches a team of agents on every pull request to look for bugs. Anthropic says the feature is in research preview for Team and Enterprise plans and mirrors the internal review workflow the company already runs on most of its own PRs.
According to Anthropic, the system is designed for depth rather than speed. When a PR opens, multiple agents review the change in parallel, verify suspected issues to filter out false positives, rank what they find by severity, and then post a single high-signal summary plus inline comments. Anthropic says the amount of review scales with the PR, so small changes get a lighter pass while larger or riskier diffs get more agents and deeper inspection.
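Anthropic has not published implementation details, but the fan-out, verify, and rank flow it describes maps onto a familiar orchestration pattern. Below is a minimal sketch of that pattern, with hypothetical agent-count thresholds and stand-in helpers; `review_agent` and `verify` are placeholders for model calls, not Anthropic's API:

```python
import asyncio
from dataclasses import dataclass

# Illustrative sketch only: the thresholds and helper names below are
# assumptions, not Anthropic's implementation.

@dataclass
class Finding:
    file: str
    line: int
    severity: int  # higher = more severe
    message: str

def agents_for(diff_lines: int) -> int:
    """Scale the number of reviewers with PR size (hypothetical cutoffs)."""
    if diff_lines < 50:
        return 1
    if diff_lines < 1000:
        return 3
    return 6

async def review_agent(agent_id: int, diff: str) -> list[Finding]:
    """Stand-in for one agent's independent pass over the diff."""
    await asyncio.sleep(0)  # placeholder for a model call
    return []

async def verify(finding: Finding, diff: str) -> bool:
    """Second pass that re-checks a suspected issue to filter false positives."""
    await asyncio.sleep(0)  # placeholder for a verification model call
    return True

async def review_pr(diff: str) -> tuple[str, list[Finding]]:
    n = agents_for(diff.count("\n"))
    # 1. Fan out: several agents read the diff in parallel.
    batches = await asyncio.gather(*(review_agent(i, diff) for i in range(n)))
    candidates = [f for batch in batches for f in batch]
    # 2. Verify each candidate before surfacing it.
    checks = await asyncio.gather(*(verify(f, diff) for f in candidates))
    confirmed = [f for f, ok in zip(candidates, checks) if ok]
    # 3. Rank by severity, then emit one summary plus inline comments.
    confirmed.sort(key=lambda f: f.severity, reverse=True)
    summary = f"{len(confirmed)} confirmed issue(s) from {n} agent(s)"
    return summary, confirmed

if __name__ == "__main__":
    print(asyncio.run(review_pr("+ example diff line\n")))
```

The verification pass is the notable design choice here: spending extra tokens to re-check every candidate finding is what trades speed for the low false-positive rate Anthropic reports.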
The company also published internal usage data. Anthropic said substantive review comments appeared on 16% of PRs before this system and on 54% after deployment. On PRs with more than 1,000 changed lines, 84% received findings, averaging 7.5 issues each; on PRs under 50 lines, 31% received findings, averaging 0.5 issues. Anthropic also says engineers mark fewer than 1% of findings as incorrect.
The X thread added two practical details for buyers: reviews average about 20 minutes, and pricing during the beta averages $15-25 per review, billed on token usage. Anthropic positioned Code Review as a more thorough and more expensive option alongside its existing open-source Claude Code GitHub Action. The product overview is available in Claude's Code Review announcement.
Related Articles
This is a distribution story, not just a usage milestone. OpenAI says Codex grew from more than 3 million weekly developers in early April to more than 4 million two weeks later, and it is pairing that demand with Codex Labs plus seven global systems integrators to turn pilots into production rollouts.
Why it matters: AI agents are moving from chat demos into delegated economic work. In Anthropic’s office-market experiment, 69 agents closed 186 deals across more than 500 listings and moved a little over $4,000 in goods.
AnthropicAI highlighted an Engineering Blog post on March 24, 2026 about using a multi-agent harness to keep Claude productive across frontend and long-running software engineering tasks. The underlying Anthropic post explains how initializer agents, incremental coding sessions, progress logs, structured feature lists, and browser-based testing can reduce context-window drift and premature task completion.
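The harness itself is not public, but the progress-log and feature-list ideas are straightforward to illustrate. A minimal sketch, assuming a hypothetical on-disk state file and invented helper names; a real harness would presumably wire an agent session into the loop:

```python
import json
from pathlib import Path

# Illustrative sketch of one idea from the post: persist a structured
# feature list and progress log on disk so each fresh coding session can
# re-orient without a long chat context. File name and schema are
# assumptions, not Anthropic's harness.

STATE = Path("harness_state.json")

def init_state(features: list[str]) -> None:
    """Initializer step: write the full feature list once, all pending."""
    STATE.write_text(json.dumps(
        {"features": [{"name": f, "done": False} for f in features], "log": []},
        indent=2,
    ))

def next_task() -> str | None:
    """Each incremental session starts from durable state, not chat history."""
    state = json.loads(STATE.read_text())
    pending = [f["name"] for f in state["features"] if not f["done"]]
    return pending[0] if pending else None  # None signals genuine completion

def record(feature: str, note: str) -> None:
    """Mark a feature done and append to the progress log before a session ends."""
    state = json.loads(STATE.read_text())
    for f in state["features"]:
        if f["name"] == feature:
            f["done"] = True
    state["log"].append(note)
    STATE.write_text(json.dumps(state, indent=2))

if __name__ == "__main__":
    init_state(["login form", "session cookie", "logout button"])
    while (task := next_task()) is not None:
        record(task, f"implemented {task}")  # a real harness runs an agent here
    print("all features done")
```

Gating "done" on an explicit, externally stored checklist rather than the model's own judgment is one plausible way to address the premature-completion problem the post describes.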