Anthropic updates terms: option to use data in AI

Anthropic announced an update to its Consumer Terms and Privacy Policy that gives users the option to allow their data to be used to improve AI models. Why should this matter if you use Claude for everyday questions or to debug code? Let's break it down, without unnecessary jargon. (anthropic.com)

What changed

The main news is that users can now choose whether their chats and coding sessions are used to train and improve Claude. The option applies to Free, Pro, and Max accounts, including when you use Claude Code from those accounts. It does not affect services covered by the Commercial Terms, such as Claude for Work, Claude Gov, and Claude for Education, or usage via the API or third-party integrations. (anthropic.com)

If you're an existing user, Anthropic will show a notice inside the app so you can make your choice. You have until September 28, 2025 to accept the updated terms and set your preference; after that date, you'll have to make a selection before you can keep using Claude. (anthropic.com)

Data retention: how long and what happens if you choose

If you agree to let your data be used for model training, Anthropic will extend data retention to five years for new or resumed conversations and coding sessions. If you don't opt in, retention stays at the current 30 days. The same applies to feedback you submit about Claude's responses. (anthropic.com)

Important: deleting a conversation prevents it from being used in future training, and changing your setting stops new chats from being used going forward. However, the company clarifies that data already incorporated into training runs in progress, or into models that have already been trained, won't be removed retroactively. (anthropic.com)

Why they're doing it and what benefit it promises

Anthropic says models learn from real interactions. When you use Claude to debug a code snippet, for example, that exchange can teach the model how to resolve similar bugs later. It's a concrete, everyday illustration of why live data helps.

They also argue that keeping data longer helps improve classifiers that detect abuse, spam, and fraud, which should make the platform safer for everyone. In short: more data can mean better troubleshooting and stronger protections. (anthropic.com)

What you can do now (practical steps)

  • If you're new, you'll see the option during sign-up and can choose when you create the account. (anthropic.com)
  • If you already use Claude, you'll get an in-app notice; you can accept now or postpone the decision until September 28, 2025. (anthropic.com)
  • To change your choice anytime, go to Privacy settings in Claude. (anthropic.com)

Frequently asked questions and reasonable concerns

Will they sell your data to third parties? Anthropic says it does not sell user data, and it uses automated processes to filter or obfuscate sensitive information. If privacy is a concern, you can opt out of sharing data for training and keep using Claude with the 30-day retention. (anthropic.com)

What if you change your mind later? You can update your choice at any time; Anthropic will stop using new chats for training from the moment you change the setting. Keep in mind, though, that data already processed may remain part of existing models. (anthropic.com)

Final reflection

This update surfaces the classic trade-off in AI: more data for better models, or less data for greater privacy. Which side makes more sense for you? If you create software or content, sharing interactions can help make future models more useful. If you handle sensitive info, opting out is simple and reversible.

In the end, the best choice depends on what you value today: contributing to training so you get better answers tomorrow, or keeping a shorter lifecycle for your conversations. Either way, now you know where the switch is and the basic effects of flipping it. For the full legal text and more detail, see Anthropic's official announcement and the FAQ it includes. (anthropic.com)
