Anthropic has rolled out a major shift in its privacy policy. All Claude users must now explicitly decide whether to allow their chat and coding sessions to be used for AI training, and anyone who wants to refuse must opt out before September 28, 2025. Failure to act counts as consent.

What’s changing
Previously, Anthropic deleted consumer data within 30 days unless it was flagged or legally required to be retained longer. Now, the conversations and code of users who do not opt out will be used in model training, with retention stretched to five years.
The update covers all consumer tiers: Claude Free, Pro, and Max, as well as Claude Code. Business and enterprise products, including Claude for Work, Claude for Education, Claude Gov, and API use, remain untouched.
The fine print users see
Existing users encounter a prompt labelled “Updates to Consumer Terms and Policies,” featuring a bold Accept button and a small data-sharing toggle set to “On” by default. Skipping past or ignoring the toggle means data can be used unless the user later opts out.
New users must make this choice during sign-up. Users can later change their preference in Privacy Settings, but changes only apply to future chats. Once data has been used for training, it can’t be recalled.
Why it matters
Anthropic frames the decision as user-powered safety: more data means better moderation, coding, and comprehension. But the move also reflects a race for real-world datasets, with Anthropic needing user content to scale and compete in AI.
This marks a striking privacy shift, from an opt-in ethos to opt-out defaults, raising concerns about meaningful consent and user awareness.
Quick takeaways
| Change | Impact |
| --- | --- |
| Opt-out by default | Users must act to stop data sharing; silence means assent. |
| Data retention extended | Shared data is now kept for up to five years, far longer than before. |
| Choice is reversible, but limited | Future conversations can be excluded, but past use stays. |
Anthropic’s update demands user attention, not just from a privacy standpoint but also for its broader implications for how consent is managed in AI services.