Anthropic introduces Claude Code Security, a feature built into Claude Code on the web that is now available in a limited research preview. What does it do? It scans codebases for vulnerabilities and suggests targeted patches for human review, helping teams find and fix flaws that traditional tools often miss.
What is Claude Code Security?
Is it just another static scanner? Not at all. While classic tools hunt for known patterns, Claude Code Security tries to read and reason about code the way a human security researcher would: it understands how components interact, traces data flow, and spots contextual and logical vulnerabilities that tend to slip through.
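To make "contextual and logical vulnerabilities" concrete, here is a minimal, hypothetical example (not taken from Anthropic's materials) of the kind of bug a pattern-based scanner tends to miss: every line looks individually safe, and only tracing how the request data flows reveals that the ownership check is missing entirely.

```python
# Hypothetical IDOR-style flaw: no dangerous API call, no suspicious
# string pattern -- the vulnerability is a *missing* authorization step
# that only shows up when you trace the data flow from request to record.

INVOICES = {
    101: {"owner": "alice", "total": 42.0},
    102: {"owner": "bob", "total": 99.0},
}

def get_invoice(user: str, invoice_id: int) -> dict:
    # invoice_id comes straight from the request; nothing ties it back
    # to `user`, so any authenticated user can read any invoice.
    return INVOICES[invoice_id]

def get_invoice_fixed(user: str, invoice_id: int) -> dict:
    # The contextual fix: bind the record to the requesting user.
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != user:
        raise PermissionError("not your invoice")
    return invoice
```

A grep-style rule has nothing to match on here; reasoning about what the function *should* enforce, given the rest of the application, is what surfaces the flaw.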
That doesn't mean it applies patches automatically. Every finding goes through a multi-stage verification process: Claude rechecks its own results, tries to prove or disprove each issue, filters out false positives, and assigns a severity rating. Validated findings show up in a panel where your team can review them, inspect the suggested patches, and approve fixes.
How it works in practice
- Reading and reasoning: instead of just pattern matching, the model traces how information moves inside the application.
- Internal verification: Claude tries to validate its own hypotheses to reduce noise.
- Intelligent prioritization: each finding comes with a severity rating and a confidence score so you can focus on what matters.
- Human review: nothing is applied without your approval; Claude proposes, you decide.
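The triage flow the bullets describe (verify, score, filter noise, queue for human review) can be sketched roughly like this. Everything below is an illustrative assumption, not the product's actual data model or API: the `Finding` class, the severity ranks, and the confidence threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str      # assumed scale: "critical" | "high" | "medium" | "low"
    confidence: float  # 0.0-1.0, how strongly the verification pass confirmed it

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def review_queue(findings: list[Finding], min_confidence: float = 0.5) -> list[Finding]:
    """Drop likely false positives, then order what's left so reviewers
    see the most severe, best-confirmed findings first."""
    kept = [f for f in findings if f.confidence >= min_confidence]
    return sorted(kept, key=lambda f: (SEVERITY_RANK[f.severity], -f.confidence))
```

The key design point mirrors the article: the pipeline only *ranks and filters*; the queue it produces is handed to humans, and nothing is patched without their approval.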
Why it matters now
Organizations face too many vulnerabilities and not enough people to fix them. Rule-based tools help, but they struggle with complex business-logic bugs or poorly designed access controls. So what changes with AI? It can surface novel, high-severity issues that went unnoticed for years.
Anthropic reports that with Claude Opus 4.6 their team found more than 500 vulnerabilities in production open-source projects—bugs that slipped past previous human reviews. They're working on triage and responsible disclosure with maintainers.
Risks and safeguards
It's reasonable to ask: if AI helps find vulnerabilities, couldn't it also help attackers? That's a valid concern. Anthropic is putting this capability into the hands of defenders, limiting access for now and applying responsible deployment controls. The tool also builds in steps to cut down false positives and always leaves the final decision to human developers.
Access and next steps
Claude Code Security is in a limited research preview for Enterprise and Team customers. Open-source maintainers can request accelerated, free access to collaborate on refining the tool.
If you want more details or to sign up, Anthropic suggests visiting the product page: claude.com/solutions/claude-code-security.
We're at a tipping point in software security: models are already finding bugs that were invisible before, and that can shift the balance between attackers and defenders. Which way it shifts comes down to speed and to human decisions; tools like Claude Code Security aim to put that advantage on the side of the people protecting code.
