The state of Maryland announced a partnership with Anthropic to deploy the AI model Claude across several state agencies and improve services for more than six million residents.
Can you imagine an assistant that guides a family step by step to apply for benefits, or that helps a social worker process piles of documents faster? That’s exactly what this partnership proposes.
What Maryland announced
Maryland will integrate Claude across multiple areas of government with three clear priorities:
Connect families with benefits: a virtual assistant will help families apply for programs like SNAP, Medicaid, temporary cash assistance and WIC, and suggest other supports a family may be eligible for.
Support the work of caseworkers: the state manually processes more than 150,000 documents a month for assistance programs. Claude will help verify documents, validate eligibility and offer guidance on policies in complex cases.
Upskilling and community services: a pilot will explore AI training for early-career professionals in so-called "lighthouse" industries (strategic sectors). In addition, a tool will be developed to identify unmet local needs, such as access to fresh food and childcare, and deliver data and resources to community leaders.
Why does this matter for everyday people?
Because this isn’t technology for technology’s sake: it’s about removing real friction. If a digital assistant can simplify a benefits application or spot extra programs for a family, the impact shows up directly in wellbeing and time saved.
For workers, less paperwork and faster answers can mean fewer mistakes and more time for complex human support. How much is that worth in effective social care?
What they've already tried and who they're working with
Maryland has been using Claude since June, when it launched a bilingual chatbot that made information more accessible to over 600,000 people receiving SUN Bucks and eased the load on call centers.
Anthropic worked with Governor Wes Moore, the Rockefeller Foundation and Percepta to design programs tailored to the state’s needs, aiming for this to serve as a model for other states.
Safety, governance, and remaining questions
Anthropic highlights that Claude undergoes rigorous safety testing and that its approach fits sensitive government applications. Still, there are practical questions that matter:
Privacy: how are personal data protected and what are the rules for retention and access?
Transparency: how will users know when they’re interacting with an AI and what decisions the tool automates?
Human control: who makes the final decision in complex cases and how are those decisions audited?
Equity: what measures are in place to prevent biases that could affect access to benefits?
These aren't minor concerns: for technology to improve public services, you need clear governance, auditing and citizen participation.
What we can expect and how it can serve as a model
If the rollout keeps strong controls and focuses on real impacts, Maryland could show a reproducible path: assistants that simplify procedures, a lighter administrative burden, and local diagnostic tools to help prioritize resources.
It’s not a substitute for human work, but a tool to amplify it. The difference lies in how the experience is designed and how people’s rights are protected.