Claude Code Adds Multi-Agent Code Review for Team and Enterprise
Original: Introducing Code Review, a new feature for Claude Code.
On March 9, 2026, Claude said on X that Claude Code now includes Code Review, a new feature that dispatches a team of agents on every pull request to look for bugs. Anthropic says the feature is in research preview for Team and Enterprise and is based on the same style of internal review workflow the company already uses on most of its own PRs.
According to Anthropic, the system is designed for depth rather than speed. When a PR opens, multiple agents review the change in parallel, verify suspected issues to filter out false positives, rank what they find by severity, and then post a single high-signal summary plus inline comments. Anthropic says the amount of review scales with the PR, so small changes get a lighter pass while larger or riskier diffs get more agents and deeper inspection.
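The announcement does not include implementation details, but the described workflow (scale the number of agents with the diff, filter unverified findings, rank the rest by severity) can be sketched as plain logic. Everything below — the `Finding` shape, the thresholds, and the agent counts — is hypothetical illustration, not Anthropic's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    severity: int   # higher = more serious (hypothetical scale)
    verified: bool  # True if the issue survived the false-positive check

def agents_for_diff(changed_lines: int) -> int:
    """Hypothetical scaling rule: lighter pass for small PRs,
    more agents for larger or riskier diffs."""
    if changed_lines < 50:
        return 1
    if changed_lines < 1000:
        return 3
    return 6

def consolidate(findings: list[Finding]) -> list[Finding]:
    """Drop unverified candidates, then rank by severity (highest first),
    as the announcement describes for the single posted summary."""
    return sorted(
        (f for f in findings if f.verified),
        key=lambda f: f.severity,
        reverse=True,
    )
```

The thresholds loosely mirror the PR-size buckets in Anthropic's published usage data (under 50 lines vs. over 1,000 changed lines), but the real system presumably weighs risk signals beyond raw line count.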
The company also published internal usage data. Anthropic said substantive review comments were appearing on 16% of PRs before this system and on 54% after deployment. On PRs with more than 1,000 changed lines, 84% receive findings, averaging 7.5 issues. On PRs under 50 lines, 31% receive findings, averaging 0.5 issues. Anthropic also says engineers mark fewer than 1% of findings as incorrect.
The X thread added two practical details for buyers: reviews average about 20 minutes, and beta pricing averages $15-25 per review, billed on token usage. Anthropic positioned Code Review as a more thorough and more expensive option alongside its existing open-source Claude Code GitHub Action. The product overview is available in Claude's Code Review announcement.
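Since billing is on token usage rather than a flat fee, per-review cost is just a rate calculation. The function below is a back-of-the-envelope sketch; the $3/$15 per million token rates are the standard API pricing mentioned for Sonnet-class models, used here purely for illustration — Anthropic has not said these are the rates Code Review bills at:

```python
def estimate_review_cost(
    input_tokens: int,
    output_tokens: int,
    in_rate: float = 3.0,    # illustrative $/1M input tokens (assumption)
    out_rate: float = 15.0,  # illustrative $/1M output tokens (assumption)
) -> float:
    """Estimate a token-billed review's cost in dollars."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

At these assumed rates, a review that consumes roughly 4M input tokens and 200K output tokens lands at about $15, the bottom of the quoted $15-25 beta range.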
Related Articles
Anthropic introduced Claude Sonnet 4.6 on February 17, 2026, adding a beta 1M token context window while keeping API pricing at $3/$15 per million tokens. The company says the new default model improves coding, computer use, and long-context reasoning enough to cover more work that previously pushed users toward Opus-class models.
Microsoft Research introduced CORPGEN on February 26, 2026 to evaluate and improve agent performance in realistic multi-task office scenarios. The framework reports up to 3.5x higher task completion than baseline systems under heavy concurrent load.