Anthropic Launches Claude Opus 4.7 AI Model with Improved Technical Performance

Anthropic's new Claude Opus 4.7 AI model offers enhanced technical performance, outperforming rivals on key benchmarks while maintaining accessible pricing.

Apr 16, 2026


Anthropic is shipping a deliberately limited AI model that still outperforms OpenAI and Google's latest offerings on most technical benchmarks. The company announced Claude Opus 4.7 today, describing it as "broadly less capable" than its top-tier Claude Mythos Preview but superior to previous public models in software engineering, instruction following, and real-world task completion.

The new model maintains the same $5 per million input tokens pricing as its predecessor Opus 4.6 while delivering significant performance gains across key technical tests. On agentic coding benchmarks (SWE-bench Pro), Opus 4.7 scores 64.3%, up from 53.4% for Opus 4.6 and ahead of GPT-5.4's 57.7% and Gemini 3.1 Pro's 54.2%.
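At $5 per million input tokens, per-request costs are straightforward to estimate. The sketch below covers only the input side, since the announcement does not state output-token pricing:

```python
# Back-of-the-envelope input cost at Opus 4.7's stated rate of
# $5 per million input tokens. Output-token pricing is not given
# in the announcement, so only the input side is estimated here.
INPUT_PRICE_PER_MTOK = 5.00  # USD per 1,000,000 input tokens

def input_cost_usd(input_tokens: int) -> float:
    """Estimate the input-token cost of a single request."""
    return input_tokens / 1_000_000 * INPUT_PRICE_PER_MTOK

# e.g. a 200k-token context costs $1.00 in input tokens
print(f"${input_cost_usd(200_000):.2f}")  # prints "$1.00"
```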

It leads both competitors on scaled tool use (77.3% versus GPT-5.4's 68.1%) but trails slightly on graduate-level reasoning (Opus 4.7: 94.2%, GPT-5.4 Pro: 94.4%, Gemini 3.1 Pro: 94.3%).

What makes this release strategically notable is what Anthropic isn't shipping: the company's most powerful model, Claude Mythos Preview, remains restricted to select companies through Project Glasswing, a cybersecurity initiative launched earlier this month that limits access to major organizations for security testing purposes.

"We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,"

Anthropic stated in its announcement via CNBC. The company said it experimented with efforts to "differentially reduce" the new model's cyber capabilities during training, encouraging security professionals interested in "legitimate cybersecurity purposes" to apply through a formal verification program instead.

This cautious approach contrasts with the strategy OpenAI announced just days after Anthropic unveiled Mythos earlier this month. OpenAI is scaling its Trusted Access for Cyber program to thousands of verified defenders, offering them GPT-5.4-Cyber, a fine-tuned variant that relaxes the usual guardrails for legitimate security work, including binary reverse engineering.

While Opus 4.7 trails Mythos Preview on specialized cybersecurity tasks, it brings practical improvements for general enterprise use beyond the benchmark numbers. The model processes images at more than three times the resolution of Opus 4.6, generates higher-quality interfaces and documents from visual inputs, and handles longer unsupervised tasks with improved self-verification before reporting results back to users.

A new xhigh effort level gives developers finer control over the reasoning-versus-latency tradeoff on difficult problems, and task budgets, currently in beta, let Claude prioritize work and manage costs across extended runs. Both features are particularly relevant for teams running autonomous coding workflows where cost visibility matters.
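The article names the "xhigh" effort level and beta task budgets but not the exact API fields, so the sketch below uses illustrative placeholder names ("effort", "task_budget_usd"), not documented Anthropic API parameters:

```python
# Hypothetical request sketch only: the parameter names below
# ("effort", "task_budget_usd") are illustrative placeholders for the
# features described in the article, not documented API fields.
request = {
    "model": "claude-opus-4-7",   # model id assumed for illustration
    "effort": "xhigh",            # trade extra reasoning latency for quality
    "task_budget_usd": 25.0,      # cap spend across a long agentic run (beta)
    "messages": [
        {"role": "user", "content": "Refactor the payments module and run the tests."}
    ],
}

# A harness could check the budget before dispatching further tool calls:
def within_budget(spent_usd: float, req: dict) -> bool:
    """Return True while cumulative spend is under the run's budget cap."""
    return spent_usd < req["task_budget_usd"]

print(within_budget(12.5, request))  # prints "True" while under the cap
```

Enforcing the cap client-side like this mirrors the cost-visibility concern the feature is meant to address, whatever the server-side mechanism turns out to be.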

The release arrives as Anthropic reports rapid growth: Claude traffic has grown roughly fivefold over the past year, eight of the Fortune 10 companies are now customers according to OfficeChai, and the company secured $30 billion in funding at a $380 billion valuation in February.

For enterprise buyers who won't get access to Mythos-class capabilities anytime soon, Opus 4.7 represents what they can actually get: a model that leads competitive benchmarks on most technical workloads while operating within carefully designed safety boundaries, consistent with Anthropic's reputation for prioritizing responsible AI deployment over its rivals.
