New measure exposes the labor impact of AI | Keryc
The research published by Anthropic on March 5, 2026 proposes a more direct way to measure when and where AI begins to displace real work. The central idea? It's not enough to know what a model can do in theory; you need to see how models are actually used in workplace contexts, and whether that use automates a task or merely assists with it.
Have you ever wondered when AI stops being a neat demo and starts changing jobs? This study aims to answer that by combining capabilities with real usage data.
What the study proposes
The authors create a measure called observed exposure that combines three elements: tasks described in O*NET, theoretical estimates of LLM capability (the β metric from Eloundou et al. 2023), and real professional usage data from the Anthropic Economic Index.
The intuition is simple: a task can be theoretically automatable by an LLM, but if no one performs it with AI in a work setting, or only uses AI as an aid (augmentation), the effect on employment will be limited. That's why the measure gives more weight to automated uses and to activities that actually show up in API traffic and production workflows.
How it's calculated, technically
β takes values in {0, 0.5, 1}: 1 if an LLM alone can at least halve the time a task takes, 0.5 if an LLM with additional tooling can, and 0 if neither can.
A task is declared "covered" if it appears frequently enough in Claude's workplace traffic in the Anthropic Economic Index.
Task-level coverages are aggregated to the occupation level using the share of time each task occupies in the job (the time measure in O*NET).
If you like formulas, the report's appendix has the mathematical details, but the practical idea is: measure not only capability but adoption and mode of use.
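The aggregation step can be sketched in a few lines. The task names, β values, coverage flags, and time shares below are invented for illustration; the real inputs come from O*NET and the Anthropic Economic Index, and the paper's appendix has the exact formula.

```python
# Hypothetical sketch of the observed-exposure aggregation described above.
# All numbers are invented; real data comes from O*NET and the
# Anthropic Economic Index.

def occupation_exposure(tasks):
    """Aggregate task-level coverage to the occupation level,
    weighting each task by the share of work time it occupies."""
    total_time = sum(t["time_share"] for t in tasks)
    exposure = sum(
        # covered = 1 if the task appears often enough in workplace traffic
        t["time_share"] * t["beta"] * t["covered"]
        for t in tasks
    )
    return exposure / total_time

# Toy occupation with three O*NET-style tasks
tasks = [
    {"task": "write routine code",   "beta": 1.0, "covered": 1, "time_share": 0.5},
    {"task": "debug with tooling",   "beta": 0.5, "covered": 1, "time_share": 0.3},
    {"task": "coordinate with team", "beta": 0.0, "covered": 0, "time_share": 0.2},
]

print(occupation_exposure(tasks))
```

Note how the third task drags the occupation's exposure down even though the first is fully automatable: adoption and time shares matter, not just capability.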
Main findings (technical summary)
The gap between theoretical capability and actual use is large. For example, in "Computer & Math" β suggests 94% of tasks are potentially affected, but Claude currently covers only 33%.
Occupations with higher observed exposure show somewhat weaker projected employment growth according to BLS projections. Result: every additional 10 percentage points in coverage is associated with a 0.6 percentage point drop in the 2024–2034 projected growth.
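Taken at face value (it's an association, not a causal estimate), that slope can be applied as a back-of-the-envelope adjustment. The slope is the paper's number; the example occupation below is invented.

```python
# Back-of-the-envelope use of the reported association:
# each +10 pp of coverage is associated with -0.6 pp of
# projected 2024-2034 employment growth.
SLOPE = -0.6 / 10  # pp of growth per pp of coverage

def adjusted_growth(baseline_growth_pp, coverage_pp):
    """Shift a baseline BLS growth projection by the estimated association."""
    return baseline_growth_pp + SLOPE * coverage_pp

# Invented example: a 5% baseline projection, 33% observed coverage
print(adjusted_growth(5.0, 33.0))
```

For this toy occupation the projection shifts from 5.0% to roughly 3.0%, a noticeable but far from catastrophic revision.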
Worker profile in more exposed occupations: they're more likely to be older, women, more educated, and higher-paid. That breaks the stereotype that only low-paid jobs are at risk.
So far there is no systematic increase in unemployment among the most exposed since late 2022. However, there's suggestive evidence that hiring of young workers (22–25 years) in exposed occupations fell by about 14% in the post-ChatGPT era. That points to a drop in entry rates into those jobs rather than mass layoffs.
Methodology and counterfactuals: why clarity matters
The authors stress methodological prudence. If AI's effect were sudden and massive it would be easy to detect—like the pandemic. But if AI acts like the internet or foreign trade, its footprint can be gradual and get mixed with economic cycles or changing demand.
That's why they compare more- and less-exposed workers and apply a difference-in-differences approach focused on the top exposure quartile. They also test robustness by varying the threshold from the median up to the 95th percentile.
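That comparison reduces, in its simplest two-period form, to a difference-in-differences contrast. The sketch below uses invented employment-rate means, not the paper's data.

```python
# Minimal two-period difference-in-differences sketch.
# Treated = top exposure quartile; control = everyone else.
# Periods: before vs. after late 2022 (post-ChatGPT). Numbers invented.

def did(treated_pre, treated_post, control_pre, control_post):
    """DiD estimate: change in the treated group minus change in the control group."""
    return (treated_post - treated_pre) - (control_post - control_pre)

estimate = did(treated_pre=95.0, treated_post=94.2,
               control_pre=95.5, control_post=95.3)
print(estimate)  # (94.2 - 95.0) - (95.3 - 95.5), i.e. about -0.6 pp
```

The point of the design is exactly this subtraction: economy-wide shocks that hit both groups cancel out, so what remains is the differential change in the exposed group.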
An interesting technical point: O-ring–type models suggest job loss may appear only if many tasks in an occupation are affected simultaneously, not because of small partial coverages. That complicates translating coverage into employment effects.
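The O-ring argument can be made concrete with a toy contrast between a linear response and a threshold response. The threshold value and functional forms below are invented, purely to show the shape of the argument.

```python
# Toy contrast: linear vs. O-ring-style threshold response of employment
# effects to task coverage. The 0.7 threshold is an invented illustration.

def linear_effect(coverage):
    # effect grows smoothly with coverage
    return coverage

def oring_effect(coverage, threshold=0.7):
    # negligible effect until most of an occupation's tasks
    # are covered at the same time
    return coverage if coverage >= threshold else 0.0

for c in (0.3, 0.6, 0.9):
    print(c, linear_effect(c), oring_effect(c))
```

Under the threshold view, an occupation at 33% coverage shows essentially no employment effect yet, which is why coverage numbers alone don't translate directly into job losses.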
Early signals and detectable limits
With current data, the paper estimates that a differential rise in unemployment on the order of 1 percentage point would be detectable with their design. Extreme scenarios (e.g., 100% of the top 10% of workers losing their jobs) would be obvious; moderate scenarios require continuous monitoring.
The possible drop in hiring of young workers is an example of an effect that doesn't necessarily show up as an immediate increase in unemployment: young people may stay in current jobs, leave the labor force, or return to education, which complicates interpretation.
Robustness, limitations and next steps (technical)
Usage data come from Claude and the Anthropic Economic Index. That's valuable but implies platform bias. Adding more sources (other APIs, job listing sites) will strengthen the measure.
The β metric is based on LLM capabilities from early 2023. If capabilities change fast, β must be updated.
Important judgment calls remain: what counts as "significant use", how much to weight automation versus augmentation, and how to match rare tasks to similar, better-observed ones. Still, rank–rank correlations of exposure across many variants are high, which gives confidence in the signal.
Useful future extensions: apply the methodology to other countries, follow cohorts of graduates in exposed fields, and link AI adoption to changes in labor supply by industry.
What does this tell you in practical terms?
If you work in programming, customer service, or data entry, the probability of seeing tasks automated is high. If you're responsible for policy or university training, the takeaway is: monitor hiring and entry routes for young people, and design programs that make reskilling easier.
There's no massive unemployment signal today, but there are hints of a reconfiguration in demand for young talent in exposed occupations. That's the kind of change you can anticipate if you measure well and act early.