Anthropic pushes Claude Opus 4.7 into GA with sharper coding

Original: Introducing Claude Opus 4.7

LLM · Apr 16, 2026 · By Insights AI · 2 min read

Claude Opus 4.7 is not just a routine model bump. In its April 16 release note, Anthropic put the model into general availability across Claude products, the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Pricing stays at the Opus 4.6 level: $5 per million input tokens and $25 per million output tokens.
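To make that pricing concrete, here is a minimal cost estimator using the per-token rates quoted in the release note. The function name and the example token counts are illustrative, not part of any Anthropic SDK:

```python
# Opus 4.7 list pricing from the release note (USD per million tokens).
INPUT_RATE = 5.00
OUTPUT_RATE = 25.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at Opus 4.7 list pricing."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# e.g. a 20k-token prompt with a 2k-token reply:
# 20_000 * 5 / 1e6 + 2_000 * 25 / 1e6 = 0.10 + 0.05 = $0.15
print(f"${estimate_cost(20_000, 2_000):.2f}")
```

Because output tokens cost five times as much as input tokens, long agentic transcripts are dominated by the output side of the bill.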

The clearest target is hard coding and long-running agent work. Anthropic says Opus 4.7 improves on Opus 4.6 in advanced software engineering, handles complex multi-step tasks with more consistency, and follows instructions more literally. That last point matters for production teams: prompts that worked by relying on an older model to infer or skip details may need re-testing, because the new model may obey wording that previous versions treated loosely.

The multimodal shift is also concrete. Opus 4.7 can process images up to 2,576 pixels on the long edge, about 3.75 megapixels, which Anthropic says is more than three times as much visual detail as prior Claude models. That changes the ceiling for computer-use agents reading dense screenshots, workflows that extract data from detailed diagrams, and interface work where pixel-level references affect the answer.
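A quick sketch of what the stated limit means for screenshot pipelines, assuming a client that downscales oversize images to fit the long edge (the helper names are mine; the release note specifies the limit, not how out-of-range images are handled):

```python
MAX_LONG_EDGE = 2_576  # stated Opus 4.7 limit, pixels on the longest side

def fits_without_downscale(width: int, height: int) -> bool:
    """True if the image is already within the stated long-edge limit."""
    return max(width, height) <= MAX_LONG_EDGE

def downscaled_size(width: int, height: int) -> tuple[int, int]:
    """Dimensions after shrinking the long edge to the limit (no-op if it fits)."""
    long_edge = max(width, height)
    if long_edge <= MAX_LONG_EDGE:
        return width, height
    scale = MAX_LONG_EDGE / long_edge
    return round(width * scale), round(height * scale)

# A 3840x2160 (4K) screenshot exceeds the limit and would lose detail:
print(fits_without_downscale(3840, 2160))  # False
print(downscaled_size(3840, 2160))         # (2576, 1449)
```

The practical upshot: a full-HD screenshot (1920x1080) now passes through untouched, while under earlier, lower limits even that would have been downscaled before the model saw it.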

Anthropic also points to stronger real-world work results, including state-of-the-art claims on Finance Agent and GDPval-AA, and better use of file system-based memory across long, multi-session tasks. The surrounding product updates make the release more than a model card. Opus 4.7 adds an xhigh effort level between high and max, the API gets task budgets in public beta, and Claude Code adds /ultrareview for deeper review sessions that look for bugs and design issues.

The cyber posture is part of the story. Anthropic says Opus 4.7 is the first model where it is testing new safeguards before any broad release of Mythos-class cyber capabilities. The model is meant to detect and block prohibited or high-risk cybersecurity requests, while legitimate vulnerability research, penetration testing, and red-teaming are routed through a Cyber Verification Program. The release therefore raises the bar for coding agents while also testing how Anthropic wants to gate stronger cyber uses.




© 2026 Insights. All rights reserved.