Claude accelerates scientific discoveries with AI
For a little over a year, Claude has been transforming from a text assistant into a real lab collaborator. Can you imagine a tool that helps you decide which experiments to run, analyzes mountains of data, and even proposes hypotheses humans might miss? That’s exactly what several research teams are reporting.
Claude as a scientific collaborator
Anthropic launched Claude for Life Sciences and then improved the model with Opus 4.5, which shows advances in figure interpretation, computational biology, and protein understanding. But the important news isn’t just the model’s power: it’s how scientists are using it to speed up work that used to take months.
The company also supports projects through the AI for Science program, which gives API credits to researchers with high-impact projects. With that support, academic and industry teams have built systems that integrate Claude into every stage of the scientific process, from defining experiments to analyzing large-scale results.
Biomni: bringing hundreds of tools into one place
A classic problem in biology is tool fragmentation: databases, software packages, and mismatched formats that eat up time. Biomni, from Stanford, gathers hundreds of these resources and lets an agent powered by Claude navigate them through natural language requests.
For example, a GWAS study that usually requires data cleaning, confounder control, and reconciliation with biological annotations can take months. In an early test, Biomni completed that analysis in 20 minutes. Yes, it sounds surprising; that’s why the teams validated the system with blind studies: it designed a molecular cloning experiment with results comparable to those of a postdoc with five years of experience, and it analyzed hundreds of sensor files in minutes instead of weeks.
Biomni isn’t infallible: it includes guardrails to detect when Claude goes off track and lets experts codify their methodology as a "skill" to teach the agent how things should be done in clinical or diagnostic contexts.
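To make that concrete, here’s a minimal Python sketch of the pattern: an expert codifies a protocol as a skill, and a guardrail checks each step before the agent executes it. The interface (`Skill`, `run_agent`) is hypothetical, not Biomni’s actual API; it only illustrates the idea.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical skill: an expert-written protocol the agent must follow.
@dataclass
class Skill:
    name: str
    protocol: list[str]              # ordered steps the expert codified
    validate: Callable[[str], bool]  # guardrail: checks each step

# Example skill for a clinical context: the expert pins down exact
# QC thresholds instead of letting the model improvise them.
qc_skill = Skill(
    name="variant_qc",
    protocol=[
        "Filter variants with call rate < 0.95",
        "Remove samples failing sex-check concordance",
        "Run PCA and regress out the first 10 components",
    ],
    validate=lambda step: any(
        kw in step for kw in ("call rate", "sex-check", "PCA")
    ),
)

def run_agent(task: str, skill: Skill) -> list[str]:
    """Walk the codified protocol; flag any step the guardrail rejects."""
    executed = []
    for step in skill.protocol:
        if not skill.validate(step):
            raise ValueError(f"Guardrail tripped on step: {step!r}")
        # In a real system, this is where the Claude-powered agent would
        # pick a tool (PLINK, pandas, an annotation database) and run it.
        executed.append(f"[{task}] {step}")
    return executed

for line in run_agent("GWAS pre-processing", qc_skill):
    print(line)
```

The point of the skill is exactly that it replaces the model’s improvisation with the expert’s own thresholds, which is what makes the approach viable in clinical or diagnostic settings.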
MozzareLLM and Brieflow: automating interpretation of genetic screens
Iain Cheeseman’s lab uses CRISPR to generate massive screens: they knock out genes and photograph cells to see what changes. Image processing and grouping genes into clusters were handled by the Brieflow software, but interpreting those clusters took hours of reading and expert judgment.
Matteo Di Bernardo created MozzareLLM, a system with Claude that mimics Cheeseman’s interpretation process: it identifies shared biological processes within a cluster, distinguishes well-known genes from understudied ones, and assigns confidence levels. The result: much faster analysis and discoveries the human team had overlooked.
A detail I like: MozzareLLM doesn’t just give an answer — it explains how confident it is. That transparency helps you decide whether it’s worth investing resources in experimental validation.
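That pattern, structured annotation plus an explicit confidence field, is easy to prototype with the Anthropic Python SDK. The sketch below is not MozzareLLM’s actual code; the prompt, the JSON schema, and the model name are assumptions for illustration.

```python
import json
import anthropic  # pip install anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PROMPT = """You are annotating a cluster from a CRISPR image-based screen.
Genes in the cluster: {genes}

Return only JSON with exactly these keys:
  "shared_process":     the biological process these genes most likely share
  "well_characterized": genes with strong prior literature
  "understudied":       genes with little prior literature
  "confidence":         one of "high", "medium", "low"
  "rationale":          one short sentence justifying the confidence level
"""

def annotate_cluster(genes: list[str]) -> dict:
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumption: any recent Claude model works
        max_tokens=500,
        messages=[{"role": "user", "content": PROMPT.format(genes=", ".join(genes))}],
    )
    # A production system would validate the JSON and retry on parse errors.
    return json.loads(response.content[0].text)

annotation = annotate_cluster(["CENPA", "CENPC", "MIS18A", "C1orf112"])
print(annotation["shared_process"], "->", annotation["confidence"])
```

Keeping the confidence and rationale fields in the output is what lets a human triage which clusters deserve follow-up experiments.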
Lundberg Lab: proposing which genes to study before spending on the screen
Some labs hit the bottleneck earlier: choosing which genes to include in a focused screen that can cost over $20,000. The traditional method is a spreadsheet with human suggestions — it works, but it’s slow and biased toward what’s already known.
Lundberg’s approach was different: they built a map of all cellular molecules and their relationships, and asked Claude to propose candidates to study based on molecular properties. They’re testing the approach with genes related to primary cilia, little-studied cellular structures. If Claude identifies more real genes than the human group or does it much faster, this could change how targets are chosen for focused screens.
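As a rough intuition for how candidates can fall out of a molecular map, here’s a toy guilt-by-association baseline in Python: score each gene by how strongly it connects to known primary-cilia genes. This is not the Lundberg Lab’s method (Claude reasons over far richer molecular properties, and the gene names below are placeholders); it just makes the idea of proposing candidates from relationships tangible.

```python
# Toy guilt-by-association baseline: rank candidate genes by how strongly
# they connect to known primary-cilia genes in a molecular interaction map.
interactions = {  # hypothetical edges in the molecular map
    "IFT88":  {"IFT20", "KIF3A", "GENE_X"},
    "IFT20":  {"IFT88", "GENE_X", "GENE_Y"},
    "KIF3A":  {"IFT88", "GENE_Z"},
    "GENE_X": {"IFT88", "IFT20"},
    "GENE_Y": {"IFT20", "GENE_Z"},
    "GENE_Z": {"KIF3A", "GENE_Y"},
}
known_cilia = {"IFT88", "IFT20", "KIF3A"}

def score(gene: str) -> float:
    """Fraction of a gene's neighbors that are already known cilia genes."""
    neighbors = interactions.get(gene, set())
    return len(neighbors & known_cilia) / len(neighbors) if neighbors else 0.0

candidates = sorted(
    (g for g in interactions if g not in known_cilia),
    key=score,
    reverse=True,
)
for g in candidates:
    print(f"{g}: {score(g):.2f}")  # top hits go into the focused screen
```

The interesting comparison is exactly the one the lab is running: whether a model proposing candidates from molecular properties beats the spreadsheet of human suggestions on hit rate or speed.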
What does this mean for science — and for you?
We’re not looking at a magic bullet. These systems don’t replace scientists, but they do remove bottlenecks: they automate repetitive tasks, propose hypotheses, and let experts focus on validating and designing creative experiments.
There are clear limits: models can be wrong, results need experimental validation, and safeguards and expert methods must be integrated when the context demands it. Several teams show that combining AI with expert knowledge, by teaching the agent concrete procedures, is an effective practice.
Looking ahead
The trend is clear: each new model version brings improvements that expand the tasks AI can support. What a few years ago was limited to paper summarization and coding help now contributes to experimental design, large-scale analysis, and the generation of research candidates.
So what now? The practical answer is simple: labs that adopt these tools with rigorous criteria, validation measures, and collaboration between AI and experts will have an advantage. For the rest of us, this means faster research and, potentially, scientific discoveries that impact health, biotech, and basic knowledge sooner.