Anthropic has announced a new monthly survey designed to capture how people are experiencing, and what they expect from, the economic changes driven by artificial intelligence. Why does that matter to you? Because it is about real experiences: which tasks people hand off to AI, whether they feel more productive, and how jobs and roles are shifting.
The initiative uses Anthropic Interviewer to collect accounts from Claude users and aims to spot, in near real time, changes that traditional metrics have not yet revealed.
What Anthropic Announces
Anthropic’s Economic Research team is launching the Anthropic Economic Index Survey, a monthly survey that starts today. Each month they invite a small, random group of Claude users who have had personal accounts for at least two weeks.
You might see the invitation as a banner on claude.ai, in the Cowork desktop app, or by email if you mainly use Claude on mobile. The sample rotates monthly to broaden the diversity of voices over time.
What they want to measure and why it matters
Quantitative data such as model usage and traffic, along with traditional labor indicators (employment, wages, layoffs), are essential, but they have limits. Do they tell you how people actually experience the change, which tasks they hand to AI, or whether they feel productivity gains? Not always.
This survey adds a continuous qualitative layer: open questions about current work, one-year expectations, and ten-year visions. That helps capture early signals before they show up in macroeconomic aggregates.
The real novelty is the monthly cadence: it reveals not just what people think, but how quickly those opinions shift as AI capabilities evolve.
Methodology, questions and analysis (technical)
Main questions: which tasks are being handed off to AI, where people observe productivity gains, notes on hiring and roles, expectations for the future, and what people would want from a well-managed transition.
A likely flow for the technical analysis (summarized; illustrative code sketches follow the list):
Collection with Anthropic Interviewer and storage according to the privacy policy.
Text preprocessing: tokenization, normalization, and removal of PII where applicable.
Representation extraction: embeddings for clustering, semantic search and topic detection.
Topic modeling: classic approaches or modern techniques such as BERTopic to map emerging themes (first sketch below).
Temporal analysis: time series and changepoint detection to see when perceptions shift by cohort (second sketch below).
Quantitative linkage to usage: correlate qualitative signals with aggregate Claude usage metrics in a privacy-preserving way (third sketch below).
Control and correction techniques: weighting to adjust representativeness (fourth sketch below), robustness checks against self-selection bias, and cross-validation with administrative data and usage metrics.
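As an illustration of what the embedding and topic-modeling steps could look like, here is a minimal sketch using sentence-transformers and BERTopic. The answers, model name and parameters are placeholder assumptions for the example; Anthropic has not published its actual pipeline.

```python
# Minimal sketch (not Anthropic's code): cluster open-ended survey answers
# into emerging themes with sentence embeddings + BERTopic.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

base_answers = [
    "I hand off first drafts of emails and reports to the assistant.",
    "Debugging with AI feels faster, but we paused hiring junior developers.",
    "I use it to summarize meetings; my own role has not changed much.",
    "Clients now expect turnaround in hours instead of days.",
]
# Toy corpus: in practice this would be thousands of deidentified answers.
responses = [f"{text} (respondent {i})" for i, text in enumerate(base_answers * 40)]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # embedding model (assumed)
topic_model = BERTopic(embedding_model=embedder, min_topic_size=10)
topics, probs = topic_model.fit_transform(responses)

# One row per discovered theme, with size and keywords (-1 is the outlier bucket).
print(topic_model.get_topic_info().head())
```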
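For the temporal step, one common choice is changepoint detection over a monthly series, for instance with the ruptures library. The series and penalty below are synthetic, just to show how a shift in a theme's prevalence could be flagged.

```python
# Minimal sketch: flag the month where a theme's prevalence shifts.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# 24 synthetic months: the theme is mentioned by ~15% of respondents,
# then by ~30% after month 12.
prevalence = np.concatenate([
    rng.normal(0.15, 0.02, 12),
    rng.normal(0.30, 0.02, 12),
])

# PELT search for changepoints; `pen` trades sensitivity against false alarms.
algo = rpt.Pelt(model="rbf").fit(prevalence)
breakpoints = algo.predict(pen=3)
print("Changepoints at month indices:", breakpoints)  # last index closes the series
```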
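The linkage to usage could start as simply as correlating two aggregate monthly series, as in this sketch with made-up numbers; working only with aggregates is one way to keep the comparison privacy-preserving.

```python
# Minimal sketch: compare a qualitative signal with an aggregate usage metric.
import pandas as pd

monthly = pd.DataFrame({
    "month": pd.period_range("2025-01", periods=6, freq="M"),
    "theme_share": [0.12, 0.14, 0.18, 0.21, 0.24, 0.27],  # from the topic model
    "usage_index": [100, 108, 121, 133, 141, 150],        # aggregate usage (illustrative)
})

# First pass: Pearson correlation between the two aggregate series.
print(monthly["theme_share"].corr(monthly["usage_index"]))
```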
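And for the weighting step, a basic post-stratification sketch: each respondent receives a weight equal to the population share of their group divided by that group's share in the sample. Group labels and shares are invented for the example.

```python
# Minimal sketch: post-stratification weights so the sample better matches
# a reference population (occupations and shares are illustrative).
import pandas as pd

sample = pd.DataFrame({
    "occupation": ["software", "software", "education", "healthcare", "software"],
})

# Reference shares, e.g. from labor-force statistics (made-up numbers).
population_share = pd.Series({"software": 0.05, "education": 0.10, "healthcare": 0.15})
sample_share = sample["occupation"].value_counts(normalize=True)

sample["weight"] = sample["occupation"].map(population_share / sample_share)
print(sample)
```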
Privacy, publication and limits
Anthropic will process data according to its Supplemental Privacy Policy. They may include deidentified responses in publications if users opt in.
Limitations to keep in mind:
Sample bias: the survey reaches only Claude users and those who agree to participate, which can skew perceptions toward more tech-savvy groups.
Self-reporting: what people remember or want to report doesn't always match objective measurements.
Timing: while monthly cadence reduces latency, some economic effects take longer to materialize.
Technical measures to mitigate these risks include sample rotation, weighting strategies, deidentification, and the use of aggregates or, where applied, differential privacy techniques when sharing results.
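As one illustration of that last point, aggregate counts can be published with calibrated noise via the Laplace mechanism; the epsilon value and counts below are placeholders, not parameters Anthropic has announced.

```python
# Minimal sketch: noisy release of aggregate theme counts (Laplace mechanism).
import numpy as np

rng = np.random.default_rng(42)

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    # For a counting query the sensitivity is 1, so the noise scale is 1/epsilon.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

theme_counts = {"delegating_drafting": 412, "hiring_slowdown": 97}
print({theme: round(noisy_count(c), 1) for theme, c in theme_counts.items()})
```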
Practical implications for researchers, companies and regulators
Researchers: a continuous source of qualitative data that can be combined with metrics for longitudinal studies on adoption and professional practice.
Companies: early insights into how AI changes roles and output; useful for product design and retraining plans.
Regulators and policymakers: signals on how people perceive risks and opportunities, which can guide labor and education interventions.
If you’re invited to participate, it’s a good chance to share direct experiences that enrich public and technical understanding of AI’s impact.
The first findings will be complemented by an earlier report that analyzed 81,000 open responses collected in December, which helps put the new monthly series in context.
Final reflection
This survey aims to bridge cold statistics and human experience. At a time when AI is rapidly changing tasks and expectations, listening to people regularly and rigorously can help anticipate trends and design better public and private responses.