
Anthropic adds automated code review to Claude Code

As pull requests surge and AI writes faster than humans can verify, quality control becomes a metered service


Rebecca Bellan, techcrunch.com

Anthropic has added an automated “Code Review” feature to Claude Code, aimed at teams now drowning in pull requests generated by AI-assisted programming. The tool, launched in a research preview for Claude for Teams and Claude for Enterprise, integrates with GitHub and comments directly on proposed changes, tagging issues by severity with a color system, according to TechCrunch.

The pitch is simple: if AI tools can turn a short prompt into a large code change, the limiting factor shifts from writing to verifying. Cat Wu, Anthropic’s head of product, told TechCrunch that enterprise customers are asking how to review the “sheer amount” of pull requests Claude Code helps produce. In practice, the product reframes code review from a social process—where a human reviewer is accountable for what merges—to a pipeline step that can be purchased, metered, and scaled.

Anthropic says the system focuses on logical errors rather than style, and explains its reasoning step by step. Under the hood, it uses multiple agents in parallel to examine different “dimensions” of a codebase, with a final agent aggregating and ranking findings. That design is also what makes it expensive: pricing is token-based, and Wu estimated an average review cost of $15 to $25 depending on complexity. For a team that has already increased its pull-request volume through AI, the new cost center is not compute for generation but compute for validation.
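The fan-out-then-aggregate design described above can be sketched in miniature. Everything below is hypothetical: Anthropic has not published its agent code, so the "dimension" reviewers here are stand-in heuristics where the real system would make model calls, and the severity labels are invented for illustration. The shape of the pipeline—parallel reviewers, one final ranking pass—is the only part taken from the article.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Hypothetical severity scheme; the article only says issues are
# tagged by severity with a color system, not what the levels are.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    dimension: str
    severity: str
    message: str

# Stand-ins for model-backed agents. In the real product each
# "dimension" reviewer would be an LLM examining the diff.
def logic_agent(diff: str) -> list[Finding]:
    findings = []
    if "== None" in diff:
        findings.append(Finding("logic", "medium", "use 'is None' for None checks"))
    return findings

def security_agent(diff: str) -> list[Finding]:
    findings = []
    if "eval(" in diff:
        findings.append(Finding("security", "critical", "avoid eval() on untrusted input"))
    return findings

def review(diff: str) -> list[Finding]:
    """Fan out to dimension agents in parallel, then aggregate and rank."""
    agents = [logic_agent, security_agent]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    merged = [f for agent_findings in results for f in agent_findings]
    # Final "aggregator" step: a single pass ranks findings by severity
    # before they are posted as review comments.
    return sorted(merged, key=lambda f: SEVERITY_ORDER[f.severity])

if __name__ == "__main__":
    sample_diff = "if user == None:\n    eval(payload)"
    for f in review(sample_diff):
        print(f.severity, f.dimension, f.message)
```

The parallel fan-out is also why the product is priced the way it is: each dimension agent consumes tokens independently, so review cost scales with both the size of the diff and the number of dimensions examined.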

The second-order effects are less about any single bug and more about how teams behave when a green checkmark becomes a default. If the reviewer is a service that runs on every engineer’s pull request, the temptation is to ship more changes with thinner human scrutiny, because the tool has “looked.” Meanwhile, the tool’s most valuable outputs—risk classification, policy checks, and the ability to enforce house rules at scale—tend to concentrate control in whoever configures it: engineering leadership, security teams, or compliance.

TechCrunch notes that Code Review includes “light” security analysis and can be customized for internal best practices, while Anthropic positions a separate product, Claude Code Security, for deeper security work. That split mirrors how large organizations already buy software: first a productivity tool, then a governance layer to manage what that tool enables. Once the review process itself is mediated by a vendor, the boundary between software quality and corporate policy enforcement becomes a product setting.

Anthropic is rolling out Code Review first to paying enterprise tiers, at a moment when it is leaning harder on that business. TechCrunch reports that Claude Code’s run-rate revenue has surpassed $2.5 billion since launch.

The immediate problem is pull requests arriving faster than humans can read them. The proposed solution is a second AI system that reads them first—and charges per read.