Anthropic has announced the acquisition of Vercept to improve Claude's ability to use applications the way a person would at a keyboard. Does that sound like science fiction? It's actually a practical evolution: letting a model perform complex tasks inside live applications (navigating windows, forms, and files) changes how real workflows get automated.
What Anthropic announced
The company acquired Vercept, a team specialized in the hard problem of getting AI to see and interact with the same software you use every day. Cofounders Kiana Ehsani, Luca Weihs, and Ross Girshick are joining Anthropic, and Vercept's external product will be shut down in the coming weeks while the team focuses on building Claude's internal capabilities.
Why is this relevant now? Because people already use Claude for complex tasks: writing and running code in repositories, summarizing research from many sources, and coordinating tools and teams. Integrating Vercept aims to let Claude do all that directly inside live applications.
What "computer use" means for you
When Anthropic talks about "computer use" they mean Claude acting inside applications like a person would: filling forms, manipulating complex spreadsheets, switching flows across tabs, and executing chained steps. It's not just generating code or instructions—it's operating in the real environment where the tasks actually happen.
This opens the door to problems that code generation alone doesn't solve: integrating tools, recognizing interface elements, and taking actions based on context.
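As a rough illustration of what enabling this looks like for developers: Anthropic's Messages API exposes a computer-use tool that gives Claude a virtual screen to act on. The tool version string, model name, and display dimensions below are illustrative; check the current API documentation before relying on them.

```python
# Sketch: assembling a Messages API request that enables the computer-use
# tool. No network call is made here; this only builds the payload.
import json


def build_computer_use_request(task: str) -> dict:
    """Assemble a request payload that asks Claude to operate a screen."""
    return {
        "model": "claude-sonnet-4-5",       # illustrative model name
        "max_tokens": 1024,
        "tools": [{
            "type": "computer_20250124",     # tool version string (check docs)
            "name": "computer",
            "display_width_px": 1280,        # resolution of the screen you expose
            "display_height_px": 800,
        }],
        "messages": [{"role": "user", "content": task}],
    }


payload = build_computer_use_request("Open the spreadsheet and total column B.")
print(json.dumps(payload, indent=2))
```

In practice this sits inside an agent loop: the model responds with actions (click, type, screenshot), your code executes them against a real or sandboxed display, and the results are fed back as the next message.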
What Vercept brings
Vercept focused on two hard problems: perception (helping the AI understand the interface) and interaction (helping the AI act precisely in that interface). That expertise is exactly what Anthropic needs to push Claude to the next level, especially for multi-step tasks inside real applications.
Anthropic also notes that Vercept isn't the first team it has added: it previously integrated Bun. The company looks for groups with technical ambition and a shared focus on safety and rigor.
Indicators of progress: Sonnet 4.6 and OSWorld
Anthropic has already released Sonnet 4.6, a version that shows significant improvements in these skills. On the OSWorld benchmark, used to measure computer use, Sonnet models went from under 15% at the end of 2024 to 72.5% today, approaching human-level performance on tasks like navigating complex spreadsheets and filling web forms across tabs.
These numbers aren't empty promises: they show real progress on practical tasks that affect productivity and automation.
What changes for users, companies and developers
For users: more automated actions inside your favorite apps, from filling reports to reviewing data pipelines.
For companies: the possibility to integrate assistants that don't just suggest steps but execute them safely, reducing friction in long processes.
For developers: new opportunities to design experiences where AI interacts with interfaces, along with new challenges in security and human review.
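That last point, human review, can be made concrete with a small pattern: an assistant proposes actions, but nothing runs without a reviewer's sign-off. This is a minimal sketch under assumed names (`ProposedAction`, `run_with_review`, and the example actions are hypothetical, not part of any Anthropic API):

```python
# Sketch of a human-in-the-loop gate: agent-proposed actions execute only
# after explicit approval; everything else is skipped and logged.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedAction:
    description: str                 # human-readable summary shown to the reviewer
    execute: Callable[[], str]       # the side-effecting operation itself


def run_with_review(actions: list[ProposedAction],
                    approve: Callable[[str], bool]) -> list[str]:
    """Run each action only if the approver signs off; record skips otherwise."""
    results = []
    for action in actions:
        if approve(action.description):
            results.append(action.execute())
        else:
            results.append(f"skipped: {action.description}")
    return results


# Example policy: auto-approve read-only actions, block everything else.
actions = [
    ProposedAction("read report.xlsx", lambda: "read ok"),
    ProposedAction("delete old_data/", lambda: "deleted"),
]
print(run_with_review(actions, approve=lambda d: d.startswith("read")))
# → ['read ok', 'skipped: delete old_data/']
```

In production the `approve` callback would be a UI prompt or a policy engine rather than a string check, but the shape is the same: the model suggests, a gate decides, and only then does anything touch the live application.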
Anthropic complements this with new code security features, like Claude Code Security, which looks for vulnerabilities in codebases and suggests patches for human review.
A practical leap, not just theoretical
Is this the end of human work? Not at all. It's a tool that can take over repetitive or highly integrated tasks across apps, leaving people focused on creative, high-impact decisions. It also forces us to think about controls, review, and responsible design from the start.
If you're interested in working on this from the inside, Anthropic invites engineers and specialists to check their careers page.