Anthropic launches Claude Opus 4.7 with better code and vision | Keryc
Claude Opus 4.7 is now available, and it targets precisely the jobs you were most reluctant to delegate: long, complex coding tasks, careful technical analysis, and detailed reading of high-resolution images. The result? More confidence letting the AI take on the hardest parts of a project without constant supervision.
What's new in Opus 4.7
Opus 4.7 arrives as a more rigorous and consistent model than Opus 4.6 for advanced software engineering tasks. Here are the most relevant changes, explained without unnecessary jargon:
Better execution on long, multi-step tasks: think pipelines, automations and reviews that used to stall or need frequent human correction.
More attention to instructions: it follows requests more literally, so you may need to adjust prompts that previously relied on the model filling in the gaps.
Higher-resolution vision: accepts images up to 2,576 pixels on the long side, which opens use cases for complex diagrams, dense screenshots and work that needs fine visual detail.
New effort levels: xhigh appears between high and max to better control the depth of reasoning versus latency.
Improved file-based memory: it remembers notes across long sessions and uses them to pick up work with less startup context.
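The effort ladder above can be sketched as a validated request parameter. This is a hypothetical shape: the `effort` field name and the `low`/`medium` levels are assumptions for illustration; the article only confirms `high`, `xhigh`, and `max`.

```python
# Hypothetical sketch of choosing an effort level for a request.
# The "effort" parameter name and the low/medium rungs are assumptions,
# not a confirmed API; the article confirms high, xhigh, and max.

EFFORT_LEVELS = ("low", "medium", "high", "xhigh", "max")

def build_request(prompt: str, effort: str = "high") -> dict:
    """Assemble request parameters, validating the effort level first."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "effort": effort,  # hypothetical parameter name
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_request("Review this diff for concurrency bugs.", effort="xhigh")
print(params["effort"])
```

The point of the ladder is the trade-off the article describes: deeper reasoning at `xhigh` in exchange for more latency and output tokens.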
Performance and use cases
Early testers report practical jumps, not just score improvements: Opus 4.7 fixes more complex bugs, performs code reviews with higher recall, and produces interfaces and presentations with a more professional polish. Want concrete examples?
Finance teams saw more rigorous analyses and cleaner presentations.
Code-review platforms caught more hard-to-find bugs without losing precision.
Life-sciences research tools can read chemical structures and diagrams with more detail thanks to the improved resolution.
In internal benchmarks, Opus 4.7 outperforms Opus 4.6 on several key metrics: solving coding tasks, consistency in long contexts, and quality when calling tools. For you, that means fewer interruptions and more work completed in a single session.
Security, Project Glasswing and the Cyber Verification Program
Anthropic clarified that Opus 4.7 does not have the advanced cyber capabilities of their Mythos Preview model. During training, Anthropic worked to selectively reduce sensitive capabilities, and the Opus 4.7 launch activates safeguards that detect and block high-risk uses in cybersecurity.
If you are a security professional and need to use Opus 4.7 for legitimate research (pentesting, red-teaming, vulnerability work), you can apply to the new Cyber Verification Program to get access and support.
Opus 4.7 serves as a testbed for new protections before rolling out more advanced models more broadly.
Availability and pricing
Opus 4.7 is available today across all Claude products and via API, as well as through Amazon Bedrock, Google Cloud Vertex AI and Microsoft Foundry. Pricing remains the same as Opus 4.6: 5 USD per million input tokens and 25 USD per million output tokens. In the API you can invoke the model as claude-opus-4-7.
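At the quoted rates, per-request cost is simple arithmetic. A minimal sketch, using only the prices stated above (5 USD per million input tokens, 25 USD per million output tokens):

```python
# Estimate per-request cost at the stated Opus 4.7 rates.
INPUT_USD_PER_MTOK = 5.0    # USD per million input tokens
OUTPUT_USD_PER_MTOK = 25.0  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD of a single request."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# Example: a long code-review prompt with a detailed answer.
print(request_cost(200_000, 20_000))  # → 1.5 USD
```

Output tokens dominate quickly at a 5x price ratio, which is why the token-budget controls below matter for long runs.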
What's new on the platform and in Claude Code
Task budget control in beta, to manage how many tokens you spend on long processes.
In Claude Code, the /ultrareview command creates a dedicated review session that hunts for bugs and design issues; Pro and Max users get three ultrareviews free to try.
Extended auto mode for Max users to allow longer runs with fewer interruptions and permission checks.
Safety and alignment
Overall, Opus 4.7 keeps a safety profile similar to Opus 4.6's, with improvements in honesty and resistance to prompt injection, though with small regressions on some harm-mitigation metrics. Anthropic describes the model as largely aligned and reliable, though not perfect. Mythos Preview remains, they say, their best-aligned model.
What should you consider before migrating?
Two practical changes that affect token consumption:
New tokenization: the same text can map to more tokens, roughly between 1.0 and 1.35x depending on the content type.
More thinking at higher effort levels: at high or xhigh the model produces more output to deliver reliable results on hard problems.
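The tokenization change is easy to project before migrating. A back-of-the-envelope sketch applying the 1.0-1.35x multiplier from the list above to an assumed monthly input volume (the 500M figure is illustrative, not from the article):

```python
# Project how the 1.0-1.35x tokenization change shifts input-side spend,
# at the article's quoted rate of 5 USD per million input tokens.
INPUT_USD_PER_MTOK = 5.0

def projected_input_cost(monthly_input_tokens: int, multiplier: float) -> float:
    """Monthly input cost in USD after applying a tokenization multiplier."""
    return monthly_input_tokens * multiplier * INPUT_USD_PER_MTOK / 1_000_000

# Illustrative volume: 500M input tokens per month.
baseline = projected_input_cost(500_000_000, 1.0)   # today's token counts
worst = projected_input_cost(500_000_000, 1.35)     # worst-case growth
print(baseline, worst)
```

Running the same projection over your real traffic mix is the "measure before migrating" tip below in concrete form.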
Quick tips:
Measure impact with real traffic before migrating to production.
Adjust prompts for Opus 4.7's greater literalness.
Use task budgets or ask for more concise answers if you want to control tokens.
Anthropic published a migration guide with more practical details.
Final thoughts
Claude Opus 4.7 is a clear bet on reliability for complex tasks and on improving multimodal vision. If you work with serious code, long reviews, financial analysis or data interfaces, it's worth trying and measuring the difference. Ready to let AI carry the hardest parts of the work and give you back time to think about the product?