Anthropic Launches AI Tool That Hunts Software Vulnerabilities

Anthropic's new AI tool autonomously hunts and patches software vulnerabilities, disrupting the cybersecurity market and causing stock declines.

Feb 21, 2026
5 min read
Set Technobezz as preferred source in Google News
Technobezz
Anthropic Launches AI Tool That Hunts Software Vulnerabilities

Don't Miss the Good Stuff

Get tech news that matters delivered weekly. Join 50,000+ readers.

Cybersecurity stocks plunged after Anthropic unveiled an AI tool that autonomously hunts and patches software vulnerabilities, signaling direct competition between generative AI and traditional security vendors.

JFrog shares dropped 24 percent, Okta fell over 9 percent, and CrowdStrike declined 8 percent following the announcement earlier this week. GitLab also lost more than 8 percent, while Zscaler, Rubrik, and Palo Alto Networks saw notable declines as investors reacted to potential disruption in the $2.5 billion AI coding market.

Claude Code Security scans codebases for vulnerabilities and suggests targeted patches for human review, aiming to catch flaws that conventional methods often miss. The tool is built into Claude Code on the web and currently available in a limited research preview for Enterprise and Team customers.

"We're releasing it as a limited research preview to Enterprise and Team customers, with expedited access for maintainers of open-source repositories,"

Anthropic stated, adding that the phased rollout will help refine capabilities while ensuring responsible deployment.

The autonomous vulnerability hunting arrives as Claude Code has grown from an internal experiment to a multibillion-dollar business reaching $2.5 billion in annualized revenue within a year of commercial launch. Originally developed by Boris Cherny in an experimental division likened to Bell Labs, the tool gained organic adoption among Anthropic engineers before its public release.

"I remember Dario asking, like, 'Hey, are you forcing engineers to use this? Why is everyone using it?'"

Cherny recalled about CEO Dario Amodei's surprise at early adoption rates.

The security implications of agentic AI tools present new challenges that Anthropic detailed in a June 2025 security assessment. Claude Code operates with system-level permissions that could be exploited through prompt injection attacks, where malicious instructions embedded in documentation or code comments manipulate the AI's behavior.

Anthropic's defense strategy includes a permission system requiring user approval for potentially dangerous operations and model training to recognize injection attempts. However, the company acknowledges prompt injection remains an unsolved problem in AI safety research.

Enterprise customers receive recommendations to run Claude Code in sandboxed environments and avoid granting broad auto-approval permissions when processing untrusted content like open-source repositories.

The market reaction reflects investor assessment of generative AI's direct competition with traditional security vendors. This shift is underscored by Claude Code reaching $2.5 billion in annualized revenue within just one year.

Share this article

Help others discover this content