Anthropic published an update today to its Usage Policy that clarifies how its products may be used and what is prohibited. Why does this matter if you use Claude, its agents, or any AI system that performs automated tasks? Because these rules shape how companies, developers, and product teams design controls, transparency, and human oversight. (anthropic.com)
What changes and why it matters
The update aims to add clarity in response to user feedback, product changes, and regulatory developments. The changes take effect on September 15, 2025, so you have time to adjust processes and contracts. (anthropic.com)
In practical terms, this isn’t just wording: it spells out real risks we already see day to day, such as agents taking actions on external services or models generating instructions for technical tasks.
Addressing cybersecurity and agentic use
Anthropic recognizes the fast progress of agentic capabilities and adds sections that prohibit compromising computers, networks, or infrastructure. At the same time, it clarifies support for legitimate security uses, such as finding vulnerabilities with the system owner’s consent. The goal is to balance innovation and risk. (anthropic.com)
Think of a team in Caracas testing an agent to deploy updates: they now must evaluate controls to prevent that agent from being manipulated to run malicious code in production.
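To make that concrete, here is a minimal sketch of that kind of control in Python. It assumes a hypothetical agent that proposes shell commands; the allowlist, blocked patterns, and function names are illustrative and not part of any Anthropic API.

```python
import shlex
import subprocess

# Commands the agent is allowed to invoke; everything else is refused.
ALLOWED_COMMANDS = {"git", "kubectl", "terraform"}
# Obvious red flags that should never reach production, even if allowlisted.
BLOCKED_PATTERNS = ("rm -rf", "| sh", "chmod 777")

def is_safe(proposed_command: str) -> bool:
    """Reject anything outside the allowlist or matching a blocked pattern."""
    if any(pattern in proposed_command for pattern in BLOCKED_PATTERNS):
        return False
    parts = shlex.split(proposed_command)
    return bool(parts) and parts[0] in ALLOWED_COMMANDS

def run_with_approval(proposed_command: str) -> None:
    """Execute the agent's command only after a safety check and human sign-off."""
    if not is_safe(proposed_command):
        raise PermissionError(f"Blocked agent command: {proposed_command!r}")
    answer = input(f"Agent wants to run {proposed_command!r}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        subprocess.run(shlex.split(proposed_command), check=True)
    else:
        print("Command rejected by the operator.")
```

The point of the design is that the agent only proposes actions; a deterministic filter plus a human approval step stand between the model and production.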
More precise restrictions on political content
Previously the policy had a broad ban on campaign and lobbying content. Anthropic replaces that with a more targeted rule: it prohibits deceptive or disruptive activities that affect democratic processes, as well as the use of AI for voter microtargeting, while still allowing legitimate political research, civic education, and responsible political writing. This opens useful space for researchers and NGOs while keeping guardrails against manipulation. (anthropic.com)
A practical example: an NGO using Claude to explain legal changes to citizens is likely allowed; using it to create targeted and deceptive campaigns is explicitly banned.
Clarity on use by law enforcement
The update simplifies the language around police and security applications. According to Anthropic, it doesn’t change what’s allowed or forbidden, but it makes it easier to understand: it bans invasive surveillance, tracking, profiling, and biometric monitoring, while keeping certain analytical and administrative uses permitted as long as they don’t cross those lines. (anthropic.com)
If you work with governments or security vendors, review how your use case aligns with these definitions; the clearer wording helps avoid contractual misunderstandings.
Requirements for high-risk consumer-facing cases
For uses that impact public welfare and social equity (for example in health, legal, finance, or employment), the policy requires extra safeguards: human oversight, disclosure of AI use, and technical measures to reduce bias and errors. These requirements apply when model outputs go directly to consumers, not to internal B2B interactions. (anthropic.com)
That means that if a Venezuelan fintech uses Claude for decisions that affect people’s access to credit, it will need human-in-the-loop review and clear transparency for users.
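As a sketch only (the `CreditAssessment` structure, decision labels, and disclosure text are assumptions, not a prescribed design), the idea is that the model’s output becomes a recommendation, a human reviewer issues the final decision, and the user-facing response discloses that AI was involved:

```python
# Sketch of a human-in-the-loop gate for a consumer-facing credit decision.
# The data structures and the disclosure text are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CreditAssessment:
    applicant_id: str
    model_recommendation: str   # e.g. "approve" or "deny", produced by the model
    model_rationale: str        # explanation surfaced to the human reviewer

AI_DISCLOSURE = (
    "This assessment was prepared with the help of an AI system and "
    "reviewed by a human analyst before any decision was made."
)

def finalize_decision(assessment: CreditAssessment, reviewer_decision: str) -> dict:
    """A human reviewer, not the model, issues the final decision sent to the user."""
    if reviewer_decision not in {"approve", "deny"}:
        raise ValueError("Reviewer must explicitly approve or deny.")
    return {
        "applicant_id": assessment.applicant_id,
        "decision": reviewer_decision,                            # human decision is authoritative
        "model_recommendation": assessment.model_recommendation,  # kept for audit
        "disclosure": AI_DISCLOSURE,                              # transparency requirement
    }
```

The key choice here is that the model’s recommendation is stored for audit but never sent to the user as a decision on its own.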
What you can do now (practical checklist)
- Review your current use cases and mark which are consumer-facing.
- Implement human oversight where decisions affect people’s lives.
- Update terms and user notices to disclose AI use.
- Strengthen security controls for agents that perform external actions.
- Test and audit outputs in sensitive scenarios (legal, employment, finance); see the logging sketch after this list.
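For that last item, a minimal sketch of an audit trail for sensitive outputs, assuming a simple append-only JSON Lines file; the field names and log path are illustrative:

```python
# Sketch: append-only audit log for model outputs in sensitive scenarios,
# so that legal, employment, or finance answers can be reviewed later.
# Field names and the log path are assumptions for illustration.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG_PATH = "sensitive_outputs.jsonl"

def log_sensitive_output(use_case: str, prompt: str, output: str) -> None:
    """Record enough context to audit an output without duplicating raw personal data."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,                                        # e.g. "credit", "legal"
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(record, ensure_ascii=False) + "\n")
```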
Small changes now help you avoid sanctions, reputational damage, or prohibited uses down the line.
The policy is a living document. Anthropic adjusts it as risks and real-world adoption evolve, so it’s worth reviewing periodically. (anthropic.com)
Final note
This update isn’t just fine print: it forces product owners, engineers, and leaders to think about operational security, transparency, and human safeguards from the design phase. Does this apply to you? If your project uses agents or delivers results directly to users, now is a good time to map risks and update processes before September 15, 2025.