This Anthropic report, published on March 24, 2026, looks at how Claude's use changed in February 2026 and, crucially, what that tells us about users' learning curves. Why does this matter to you? Because understanding who learns to use AI and how they do it is key to measuring its economic and labor impacts.
What the Economic Index measures and how the analysis works
Anthropic uses a privacy-preserving data analysis system to observe aggregate user behavior. The main sample covers 1 million conversations drawn between February 5 and 12, 2026, a few weeks after the launch of Claude Opus 4.5 and overlapping with Opus 4.6.
- Two surfaces are compared: Claude.ai (the web product) and Anthropic's first-party API, the 1P API (programmatic flow).
- Tasks are assigned to occupational categories using O*NET codes, and a task's "value" is estimated with the average hourly wage of the workers who would do that task (BLS 2024 data).
- The authors run log-level regressions with task fixed effects to analyze experience and model-selection effects.
This sounds technical, but it's useful: the approach lets you see not just what the AI does, but who is using it and with what outcomes — while keeping individual content confidential.
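To make the fixed-effects idea concrete, here is a minimal sketch on synthetic data (not Anthropic's pipeline; every number and variable name below is illustrative). Demeaning each variable within its task absorbs the task fixed effects, so the remaining slope estimates a within-task effect:

```python
import numpy as np

# Synthetic illustration: each row is a conversation with a task id,
# a high-tenure flag, and a success indicator.
rng = np.random.default_rng(0)
n = 5000
task = rng.integers(0, 50, size=n)                  # 50 occupational tasks
tenure = rng.integers(0, 2, size=n).astype(float)   # 1 = high-tenure user
base = rng.uniform(0.4, 0.8, size=50)               # task-specific baseline
# Success probability = task baseline + an assumed 5-point tenure effect
success = (rng.uniform(size=n) < base[task] + 0.05 * tenure).astype(float)

def demean_by(group, x):
    """Subtract each group's mean from its observations (absorbs fixed effects)."""
    counts = np.bincount(group, minlength=50)
    sums = np.bincount(group, weights=x, minlength=50)
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    return x - means[group]

y = demean_by(task, success)
x = demean_by(task, tenure)
beta = float(x @ y) / float(x @ x)  # within-task tenure effect, in probability points
print(f"estimated tenure effect: {beta:.3f}")
```

The point of the within-task demeaning is the same as in the report's regressions: the tenure effect is estimated by comparing users working on the same task, so differences in task mix cannot drive the result.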
Key changes since the previous report
The headline results show subtle but persistent shifts:
- Diversification of uses on Claude.ai: concentration fell, with the top 10 tasks going from 24% to 19% of traffic. That means a wider variety of questions and use cases on the web platform.
- Migration of code to the API: programming tasks are moving to the 1P API, where Claude Code breaks work into smaller, automated calls. On the API, task concentration stayed flat, but the share of Computing and Math tasks grew.
- Average task value on Claude.ai drops slightly: the estimated average hourly task value falls from $49.3 to $47.9, driven by more simple personal queries (sports, product comparisons, home maintenance) and fewer programming queries on the web.
- Geography: within the United States, per-capita convergence continues but slows (estimates moved from 2–5 years to 5–9 years to equalize). Internationally, adoption is concentrating: the top 20 countries by per-capita use rose from 45% to 48% of the total.
Learning curves: what they found about more experienced users
Now for the most interesting part if you're thinking about skills and jobs: how does experience change outcomes?
- Operational definition: "high tenure" means users who registered at least 6 months earlier; "low tenure" means everyone else.
- High-tenure users behave differently:
  - They use Claude more for work and for tasks of higher educational complexity.
  - They have 10% fewer personal conversations, and the inputs they send score roughly 6% higher in education level.
  - Their task distribution is less concentrated; their task mix is more diverse.
- Success in conversations:
  - In raw measures, higher-tenure users show about a 10% higher conversation success rate (an internal Claude metric).
  - In controlled regressions (comparing within the same task and controlling for model, language, country, and other variables), the effect shrinks but persists: roughly a 3–5 percentage point higher probability of success for high-tenure users. That suggests it's not only that they bring better tasks; there is consistent evidence of learning-by-using.
More experienced users don't just bring more technical work; they learn to use the tool better and get better answers.
Model selection: when people choose Opus
Anthropic offers model classes: Haiku, Sonnet, and Opus. Opus is the most capable but also the most expensive per token.
- Users appear to calibrate: tasks with higher associated wages use Opus more often. On Claude.ai, for every additional $10 in a task's hourly wage, Opus use rises by about 1.5 percentage points. On the 1P API the response is larger: about 2.8 percentage points per $10.
- Concrete examples: 55% of Computing and Math tasks on paid accounts use Opus, versus 45% for educational tasks; 34% of software developer tasks use Opus on the web, versus 12% for tutoring.
This matters because users are managing cost, speed, and performance tradeoffs, and those automating via the API adjust their model choice more strongly with task complexity.
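The reported slopes can be read as a simple linear relationship. A toy calculation, assuming the slopes extrapolate linearly (which the report does not guarantee):

```python
def opus_share_change(wage_delta: float, slope_pp_per_10usd: float) -> float:
    """Predicted change in Opus usage share, in percentage points,
    for a given change in a task's hourly wage (linear extrapolation)."""
    return slope_pp_per_10usd * wage_delta / 10.0

# A task paying $20/hour more on Claude.ai (slope of about 1.5 pp per $10):
web = opus_share_change(20, 1.5)   # 3.0 percentage points
# The same wage gap on the 1P API (slope of about 2.8 pp per $10):
api = opus_share_change(20, 2.8)   # 5.6 percentage points
print(web, api)
```

The roughly doubled API slope is what the report's "larger response" amounts to: for the same wage gap between tasks, programmatic users shift toward Opus about twice as much as web users.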
Emerging automation patterns and labor risks
- The 1P API shows more directive flows and less need for human intervention: automation grew notably in customer support, B2B sales generation, data enrichment, and even trading flows and market-operations support.
- Implication: when a task migrates to the API and becomes structured, it's easier to integrate into automated processes, which accelerates the potential for substitution or transformation of the associated work.
- Inequality and adoption bias: complementary skills matter. If early users are technical and learn to use AI better, they can capture productivity gains that don't automatically flow to less familiar workers. That can reinforce labor inequalities.
Methods and limitations (brief and practical)
- Privacy: the authors use aggregation and differential privacy when measuring conversations, avoiding disclosure of individual content.
- Samples and biases: there is survivor and cohort bias; high-tenure users may be technical by initial selection. The authors try to control for this with task fixed effects and granular regressions, but some confounding may remain.
- Time horizon: the sample is short (one week in February) and coincides with marketing events (Super Bowl advertising), which can attract many new users and change the usage mix.
What this means for you and public policy
- For professionals and entrepreneurs: learning to use more capable models and choosing well among Opus, Sonnet, and Haiku pays off. Hands-on experience improves success rates.
- For companies: moving tasks to the API enables automation and scale, but requires orchestration design (for example, splitting tasks into calls, building validation oracles, and handling latency and cost).
- For regulators and policy makers: watching initial adoption isn't enough. There's a learning component that determines who benefits. Training programs, measurements of access inequality, and policies that monitor sectoral automation will be key.
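The orchestration design mentioned for companies can be sketched in a few lines. This is a hypothetical pattern, not Anthropic's implementation: `run_model` and `validate` are stand-ins for a real model call and a validation oracle.

```python
def run_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return prompt.upper()

def validate(result: str) -> bool:
    # Stub validation oracle: accept any non-empty result. A real oracle
    # might check schema conformance, run tests, or score the output.
    return bool(result.strip())

def orchestrate(subtasks: list[str], max_retries: int = 2) -> list[str]:
    """Split a job into small calls, validate each result, retry on failure."""
    results = []
    for subtask in subtasks:
        for _attempt in range(max_retries + 1):
            out = run_model(subtask)
            if validate(out):
                results.append(out)
                break
        else:
            raise RuntimeError(f"subtask failed after retries: {subtask}")
    return results

print(orchestrate(["enrich record", "draft reply"]))
```

The structure mirrors why API migration enables automation: once a task is decomposed into validated calls, each step can be retried, monitored, and costed independently, with no human in the loop.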
Using AI isn't a magic switch: it's a skill you practice. Anthropic's early signals show that users who invest time learning get better results.
Final reflection
Anthropic's numbers don't overturn the broader story: AI first spreads among users with more resources and higher-value tasks. But they add important nuance: experience matters, model choice matters, and migration to the API points to a second wave more prone to automation. The lesson: if you want to capitalize on AI, practice. Learning-by-using looks like one of the clearest channels for turning technological promise into real results.
Original source
https://www.anthropic.com/research/economic-index-march-2026-report
