Anthropic Adds AI Code Review Agents to Claude Code

Anthropic has launched an AI-powered code review tool in Claude Code that automatically analyses pull requests, helping engineering teams manage the surge in AI-generated code.

Manisha Sharma

Anthropic has introduced an artificial intelligence-based code review tool within its Claude Code platform, aiming to help engineering teams manage the rising volume of software submissions generated by AI-assisted development. The feature, called Code Review, automatically analyses pull requests before code is merged into repositories.

The company said the tool is available as a research preview for team and enterprise users. It can be activated across selected repositories and is designed to evaluate code changes using a system of AI agents that work in parallel to detect bugs and potential issues.

The launch reflects a broader shift in software development workflows. As AI coding assistants accelerate the pace of development, engineering teams are increasingly dealing with larger volumes of code submissions. That shift has begun to expose a bottleneck in traditional review processes, where human reviewers must validate code quality before deployment.

AI-Generated Code Is Increasing Review Pressure

AI-powered coding tools have made it easier for developers to generate and edit code quickly. While this improves productivity, it also increases the number of pull requests engineers must review before code can be integrated into production systems. Anthropic said this surge often leaves human reviewers with limited time to inspect each submission carefully. As a result, code may be approved without thorough examination, increasing the risk of bugs or vulnerabilities entering production environments.

The company said its code review tool was developed to address this issue by automating part of the review process. Instead of relying solely on human reviewers, organisations can use AI to analyse pull requests and flag potential issues before code is approved. According to Anthropic, the system performs what it describes as “deep, multi-agent reviews that catch bugs human reviewers often miss themselves”.

Multi-Agent Review Process

The tool operates using multiple AI agents that analyse code simultaneously. When a pull request is submitted, Code Review deploys several agents that search for bugs in parallel, verify findings to reduce false positives, and rank issues based on severity.

The level of scrutiny depends on the size and complexity of the code change. Larger pull requests are assigned more agents and undergo deeper analysis, while smaller updates receive a lighter review.
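Anthropic has not published implementation details, but the parallel, scaled review described above can be sketched roughly as follows. The agent names, the issue checks, and the severity scores are illustrative assumptions, not Anthropic's actual design:

```python
from concurrent.futures import ThreadPoolExecutor

# Each "agent" scans a diff for one class of problem and returns
# (severity, description) findings; higher severity = more serious.
def logic_agent(diff):
    return [(3, "possible off-by-one in loop bound")] if "range(" in diff else []

def security_agent(diff):
    return [(5, "string-built SQL query")] if "SELECT" in diff else []

def style_agent(diff):
    return [(1, "unused import")] if "import os" in diff else []

def review(diff):
    # Scale scrutiny with change size: larger diffs get more agents.
    agents = [logic_agent, security_agent]
    if len(diff) > 200:
        agents.append(style_agent)
    # Run the agents in parallel, then merge and rank findings by severity.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    findings = [f for result in results for f in result]
    return sorted(findings, key=lambda f: f[0], reverse=True)
```

A verification pass to weed out false positives, which Anthropic says the real system performs, would sit between the merge and the ranking step.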

Anthropic said a typical review can take up to 20 minutes to complete. Once the analysis is finished, the results are posted directly in the pull request thread. Developers receive a summary comment outlining the most significant issues alongside inline comments highlighting specific sections of code where problems were detected.

This approach allows engineering teams to see potential issues within the same environment where code collaboration already occurs.
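A summary comment plus inline comments maps naturally onto GitHub's pull-request review API (`POST /repos/{owner}/{repo}/pulls/{number}/reviews`). The sketch below only builds that payload shape; the helper name and the example finding are hypothetical, not part of Anthropic's tool:

```python
def build_review_payload(summary, findings):
    """Shape of a GitHub pull-request review: one summary body plus
    inline comments anchored to specific files and lines."""
    return {
        "body": summary,                     # the summary comment
        "event": "COMMENT",                  # comment without approving/blocking
        "comments": [                        # inline comments on the diff
            {"path": f["path"], "line": f["line"], "body": f["message"]}
            for f in findings
        ],
    }
```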

Integration With GitHub Workflows

To enable code review, administrators must first activate the feature within Claude Code settings and install the Claude Code GitHub application. After that, they can select the repositories where automated reviews should take place.

Once enabled, the system automatically reviews all new pull requests submitted to those repositories. Developers do not need to manually trigger the process, and no additional configuration is required for individual submissions.
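Automatic triggering of this kind is typically built on GitHub webhooks: a handler reacts to `pull_request` `opened` events and queues a review only for repositories an administrator has enabled. The repository names and queue below are hypothetical, illustrating the pattern rather than Anthropic's implementation:

```python
# Repositories an administrator has opted in to automated review.
ENABLED_REPOS = {"acme/backend", "acme/frontend"}

review_queue = []

def handle_webhook(event_type, payload):
    """Queue a review when a pull request is opened in an enabled repo;
    everything else is ignored, so developers never trigger it manually."""
    if event_type != "pull_request" or payload.get("action") != "opened":
        return False
    repo = payload["repository"]["full_name"]
    if repo not in ENABLED_REPOS:
        return False
    review_queue.append((repo, payload["number"]))
    return True
```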

Anthropic said a similar system is already used internally by the company to review nearly every pull request submitted within its own development workflows. The external release suggests the company is extending tools developed for internal engineering practices to enterprise customers.

Cost Model Based on Token Usage

Anthropic said the tool is billed based on token consumption, reflecting the computational resources required to analyse each pull request. According to the company, the average cost of a review ranges between $15 and $25, depending on the size and complexity of the code being evaluated.

Organisations can control usage by setting monthly spending limits, enabling reviews only for selected repositories, and monitoring performance through an analytics dashboard provided within the platform.
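As a back-of-the-envelope illustration of token-based billing combined with a monthly cap, the sketch below uses placeholder per-million-token prices, not Anthropic's published rates:

```python
def review_cost(input_tokens, output_tokens, in_price=3.0, out_price=15.0):
    """Estimated cost in dollars; prices are per million tokens (illustrative)."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

class Budget:
    """Monthly spending limit: skip a review rather than exceed the cap."""
    def __init__(self, monthly_limit):
        self.limit = monthly_limit
        self.spent = 0.0

    def try_spend(self, cost):
        if self.spent + cost > self.limit:
            return False
        self.spent += cost
        return True
```

Under these placeholder prices, a review consuming five million input tokens and one million output tokens would cost $30, in the same ballpark as the $15 to $25 figure Anthropic quotes.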

Anthropic noted that Code Review is designed to be more thorough than the open-source Claude Code GitHub action, which performs simpler automated checks. The additional depth of analysis means the tool also requires more computing resources.

Expanding AI’s Role in Software Development

The introduction of code review highlights how AI tools are moving beyond code generation into other stages of the software development lifecycle. Over the past two years, developers have increasingly adopted AI assistants to help write, refactor, and debug code. However, as these systems accelerate development speed, they also create new challenges around quality control. Automated review tools represent one attempt to maintain code reliability as development cycles shorten.

Anthropic’s multi-agent approach reflects a broader trend in AI design, where specialised agents collaborate to complete complex tasks. In the context of code review, different agents can examine various aspects of a pull request simultaneously, such as logic errors, potential security vulnerabilities, or implementation flaws. For engineering teams working with large codebases, this type of automated review could provide an additional layer of verification before human reviewers approve a change.

Addressing Risks in Modern Development Pipelines

The growing reliance on AI-generated code has raised questions about how organisations ensure quality and security in increasingly automated development pipelines. Bugs introduced through rushed reviews or incomplete testing can have serious consequences, particularly in critical software systems. By integrating automated analysis directly into pull request workflows, companies hope to detect issues earlier and reduce the burden on engineering teams.

Anthropic’s new tool represents one approach to addressing that challenge. Instead of replacing human reviewers, the system is intended to act as an additional reviewer that scans code for issues before deployment. The company said Code Review is currently available as a research preview, suggesting the system may continue to evolve as organisations begin using it within production development environments.

Anthropic has been expanding its developer-focused tools around the Claude family of AI models. Claude Code is designed to assist developers with coding tasks and integrate with existing engineering workflows. With the introduction of code review, the platform now extends into another stage of software development: validating code before it becomes part of production systems.

As AI-assisted development becomes more common across the industry, tools that help manage quality, reliability, and security are likely to become an important part of enterprise engineering environments. For organisations already relying on AI to generate code, automated review systems may help ensure that speed gains do not come at the expense of stability or security.