From 423a02684a9c6242c1f2268a777e8c302e4f6458 Mon Sep 17 00:00:00 2001 From: harshikaalagh-netizen Date: Mon, 30 Mar 2026 15:11:20 +0530 Subject: [PATCH 1/7] Update articles/anthropic-data-retention-policy.mdx via admin --- .../anthropic-data-retention-policy.mdx | 50 +++++++++++-------- 1 file changed, 30 insertions(+), 20 deletions(-) diff --git a/apps/web/content/articles/anthropic-data-retention-policy.mdx b/apps/web/content/articles/anthropic-data-retention-policy.mdx index b9ec75c5cc..d6c6c1235e 100644 --- a/apps/web/content/articles/anthropic-data-retention-policy.mdx +++ b/apps/web/content/articles/anthropic-data-retention-policy.mdx @@ -1,11 +1,11 @@ --- -meta_title: "Anthropic Claude Data Retention Policy: What You Need to Know" +meta_title: "Anthropic Claude Data Retention Policy After September 2025" meta_description: "Anthropic was supposed to be the privacy-first AI company. Then September 2025 happened. A full breakdown of what Claude keeps and how to control it." author: - "Harshika" featured: false category: "Guides" -date: "2026-03-13" +date: "2026-03-30" --- Anthropic built its reputation as the privacy-conscious alternative to OpenAI. Constitutional AI, safety-first research, no training on customer data. For a while, that reputation was earned. @@ -16,27 +16,25 @@ If you're evaluating AI providers right now, this is the context you need. ## What Does Claude Store by Default? -For consumer accounts — Claude Free, Pro, and Max — conversations are saved to your account until you delete them. Once deleted, they're removed from your chat history immediately but remain on Anthropic's back-end systems for up to 30 days before being permanently deleted. +For consumer accounts (Claude Free, Pro, and Max), conversations are saved to your account until you delete them. Once deleted, they're removed from your chat history immediately but remain on Anthropic's back-end systems for up to 30 days before being permanently deleted.
That's the standard case. A few exceptions matter: -**Policy violations.** If a conversation is flagged for violating Anthropic's usage policy, the inputs and outputs are kept for 2 years. Trust and safety classification scores from that conversation are retained for 7 years. +Policy violations. If a conversation is flagged for violating Anthropic's usage policy, the inputs and outputs are kept for 2 years. Trust and safety classification scores from that conversation are retained for 7 years. -**Feedback you submit.** Thumbs up/down ratings and bug reports are kept for 5 years. +Feedback you submit. Thumbs up/down ratings and bug reports are kept for 5 years. -**Incognito mode.** Conversations in Claude's incognito mode are never used for model training, regardless of your other settings. +Incognito mode. Conversations in Claude's incognito mode are never used for model training, regardless of your other settings. ## The September 2025 Training Policy Change -This is the part most people missed. - -Anthropic's previous stance was clean: consumer chats would not be used for training. That was the explicit promise. In August 2025, that changed. [According to Anthropic's own announcement](https://www.anthropic.com/news/updates-to-our-consumer-terms), they introduced an opt-in toggle — "You can help improve Claude" — and gave users until September 28 to make their choice. +Anthropic's previous stance was clean: consumer chats would not be used for training. That was the explicit promise. In August 2025, that changed. According to Anthropic's own announcement, they introduced an opt-in toggle, "You can help improve Claude", and gave users until September 28 to make their choice. If you opted in, Anthropic could retain your conversations in de-identified form for up to 5 years and use them for model training. If you opted out, nothing changed. There's still a 30-day retention, no training use. The reaction was immediate. 
Security researchers and privacy advocates flagged it as a "privacy pivot." The opt-in training setting extends data retention from 30 days to 5 years, a 60x increase in how long your conversations can sit in Anthropic's training pipeline. -## How to Check and Change Your Settings? +## How to Check and Change Your Settings To see where your account stands: Claude.ai → Settings → Privacy → "Improve Claude for everyone." @@ -44,11 +42,11 @@ If the toggle is on, your new conversations are eligible for training and retain Turning it off does not retroactively remove data already used for training. Like OpenAI, Anthropic doesn't unlearn from data once it's been incorporated. -## How Is the Anthropic API Different and Better for Privacy-conscious Users? +## How Is the Anthropic API Different for Privacy-Conscious Users? The consumer product and the API have meaningfully different data policies, and the API is notably stronger. -As of September 14, 2025, Anthropic reduced API log retention from 30 days to 7 days. API inputs and outputs are automatically deleted after 7 days. They are never used for model training — no opt-in, no opt-out, just a flat policy. +As of September 14, 2025, Anthropic reduced API log retention from 30 days to 7 days. API inputs and outputs are automatically deleted after 7 days. They are never used for model training. If your organization needs longer retention for auditing purposes, you can opt in to the 30-day window via your Data Processing Addendum. But the default is 7 days, which is stricter than most providers. @@ -62,22 +60,34 @@ Deleted conversations are purged within 30 days unless legally required otherwis ## What About HIPAA and GDPR? -**HIPAA:** Anthropic offers HIPAA-eligible services for qualifying healthcare customers, including a Business Associate Agreement. Under the BAA, certain features, including web search, are disabled. 
Standard consumer Claude is not HIPAA-compliant and should not be used with Protected Health Information. +HIPAA: Anthropic offers HIPAA-eligible services for qualifying healthcare customers, including a Business Associate Agreement. Under the BAA, certain features, including web search, are disabled. Standard consumer Claude is not HIPAA-compliant and should not be used with Protected Health Information. -**GDPR:** Anthropic supports GDPR compliance for commercial customers through a Data Processing Addendum. For EU-based consumer users, standard GDPR rights apply, including access, deletion, portability, and can be exercised through Anthropic's [Privacy Center](https://privacy.claude.com). Consumer accounts don't automatically come with a DPA. +GDPR: Anthropic supports GDPR compliance for commercial customers through a Data Processing Addendum. For EU-based consumer users, standard GDPR rights (access, deletion, portability) apply and can be exercised through Anthropic's Privacy Center. Consumer accounts don't automatically come with a DPA. -## The Reddit Lawsuit Worth Knowing About +## Claude's Reddit Lawsuit Worth Knowing About -In June 2025, Reddit [filed a lawsuit against Anthropic](https://ppc.land/reddit-files-lawsuit-against-anthropic-over-unauthorized-claude-ai-training/) alleging that Anthropic scraped more than 100,000 Reddit posts and comments without authorization to train Claude. Reddit presented evidence that Claude reproduced deleted Reddit posts with near-perfect accuracy. +In June 2025, Reddit filed a lawsuit against Anthropic alleging that Anthropic scraped more than 100,000 Reddit posts and comments without authorization to train Claude. Reddit presented evidence that Claude reproduced deleted Reddit posts with near-perfect accuracy. This isn't about your personal data directly.
But it's relevant context for how Anthropic has approached training data acquisition, and it matters when evaluating whether a company's stated privacy values match their actual behavior. -## Using Claude's API Through Char +## How Claude.ai Compares to Using Claude Through Char + + +| | **Claude.ai (Consumer)** | **Anthropic API via Char** | +| ---------------------------- | ---------------------------------- | ----------------------------------- | +| **Where notes are stored** | Anthropic's servers | Your device (plain markdown) | +| **Retention after deletion** | Up to 30 days on backend | 7 days (API), then deleted | +| **Used for training** | Yes — unless opted out (Sept 2025) | Never | +| **If opted into training** | Up to 5 years, de-identified | Not applicable | +| **HIPAA BAA available** | No (consumer) | Yes (enterprise API) | +| **Can switch AI providers** | No | Yes — OpenAI, Mistral, local models | + -[Char](https://char.com) is an open-source AI notepad for meetings that lets you bring your own Anthropic API key. When you connect it, your meeting data goes through the API i.e. 7-day retention, never used for training, rather than through the consumer Claude.ai product where training opt-ins and longer retention windows apply. +This is the distinction that matters if you're using AI for work conversations. When you use Claude.ai directly, your conversation is stored in Anthropic's database — subject to the training opt-in, retention windows, and policy changes that Anthropic controls. -And if Anthropic's policy trajectory gives you pause, you're not locked in. Char supports OpenAI, Mistral, Google Gemini, and local models via Ollama. Your notes stay on your device regardless of which AI processes them. Switching providers doesn't mean starting over. +When you bring your own Anthropic API key and use it through Char, the data goes through the API only. The 7-day retention window applies.
Your notes come back as plain markdown files on your device. That's what actual control looks like. It's not just a privacy toggle that defaults to whatever Anthropic decides next quarter. -[Download Char for macOS](https://char.com/download/apple-silicon) and use the AI provider your security team actually trusts. \ No newline at end of file +Download Char for macOS and use the AI provider your security team actually trusts. \ No newline at end of file From 1925ec2c8aa758445ca3a1f6ae2647797a637600 Mon Sep 17 00:00:00 2001 From: harshikaalagh-netizen Date: Mon, 30 Mar 2026 15:13:31 +0530 Subject: [PATCH 2/7] Update articles/anthropic-data-retention-policy.mdx via admin From 8ed9a1b64cf73fb8571f146b0db0476b91cb1dfb Mon Sep 17 00:00:00 2001 From: harshikaalagh-netizen Date: Mon, 30 Mar 2026 15:17:35 +0530 Subject: [PATCH 3/7] Update articles/anthropic-data-retention-policy.mdx via admin --- apps/web/content/articles/anthropic-data-retention-policy.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/apps/web/content/articles/anthropic-data-retention-policy.mdx b/apps/web/content/articles/anthropic-data-retention-policy.mdx index d6c6c1235e..bdc98b6fba 100644 --- a/apps/web/content/articles/anthropic-data-retention-policy.mdx +++ b/apps/web/content/articles/anthropic-data-retention-policy.mdx @@ -28,7 +28,7 @@ Incognito mode. Conversations in Claude's incognito mode are never used for mode ## The September 2025 Training Policy Change -Anthropic's previous stance was clean: consumer chats would not be used for training. That was the explicit promise. In August 2025, that changed. According to Anthropic's own announcement, they introduced an opt-in toggle, "You can help improve Claude", and gave users until September 28 to make their choice. +Anthropic's previous stance was clean: consumer chats would not be used for training. That was the explicit promise. In August 2025, that changed. 
[According to Anthropic's own announcement](https://www.anthropic.com/news/updates-to-our-consumer-terms), they introduced an opt-in toggle ("You can help improve Claude") and gave users until September 28 to make their choice. If you opted in, Anthropic could retain your conversations in de-identified form for up to 5 years and use them for model training. If you opted out, nothing changed. There's still a 30-day retention, no training use. @@ -66,7 +66,7 @@ GDPR: Anthropic supports GDPR compliance for commercial customers through a Data ## Claude's Reddit Lawsuit Worth Knowing About -In June 2025, Reddit filed a lawsuit against Anthropic alleging that Anthropic scraped more than 100,000 Reddit posts and comments without authorization to train Claude. Reddit presented evidence that Claude reproduced deleted Reddit posts with near-perfect accuracy. +In June 2025, [Reddit filed a lawsuit against Anthropic](https://ppc.land/reddit-files-lawsuit-against-anthropic-over-unauthorized-claude-ai-training/) alleging that Anthropic scraped more than 100,000 Reddit posts and comments without authorization to train Claude. Reddit presented evidence that Claude reproduced deleted Reddit posts with near-perfect accuracy. This isn't about your personal data directly. But it's relevant context for how Anthropic has approached training data acquisition, and it matters when evaluating whether a company's stated privacy values match their actual behavior.
From 5594b823d2a9694951da0880b89f67b46ff55ab5 Mon Sep 17 00:00:00 2001 From: harshikaalagh-netizen Date: Mon, 30 Mar 2026 15:18:05 +0530 Subject: [PATCH 4/7] Update articles/anthropic-data-retention-policy.mdx via admin --- .../content/articles/anthropic-data-retention-policy.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/apps/web/content/articles/anthropic-data-retention-policy.mdx b/apps/web/content/articles/anthropic-data-retention-policy.mdx index bdc98b6fba..2ba708b445 100644 --- a/apps/web/content/articles/anthropic-data-retention-policy.mdx +++ b/apps/web/content/articles/anthropic-data-retention-policy.mdx @@ -84,10 +84,10 @@ This isn't about your personal data directly. But it's relevant context for how | **Can switch AI providers** | No | Yes — OpenAI, Mistral, local models | -This is the distinction that matters if you're using AI for work conversations. When you use Claude.ai directly, your conversation is stored in Anthropic's database — subject to the training opt-in, retention windows, and policy changes that Anthropic controls. +[Char](https://char.com/) is an open-source AI notepad for meetings that lets you bring your own Anthropic API key. When you connect it, your meeting data goes through the API (7-day retention, never used for training) rather than through the consumer Claude.ai product, where training opt-ins and longer retention windows apply. -When you bring your own Anthropic API key and use it through Char, the data goes through the API only. The 7-day retention window applies. Your notes come back as plain markdown files on your device. +And if Anthropic's policy trajectory gives you pause, you're not locked in. Char supports OpenAI, Mistral, Google Gemini, and local models via Ollama. Your notes stay on your device regardless of which AI processes them. Switching providers doesn't mean starting over. That's what actual control looks like.
It's not just a privacy toggle that defaults to whatever Anthropic decides next quarter. -Download Char for macOS and use the AI provider your security team actually trusts. \ No newline at end of file +[Download Char for macOS](https://char.com/download/apple-silicon) and use the AI provider your security team actually trusts. \ No newline at end of file From 1a110b4b9648f39c3515f988f82ed4993659e28e Mon Sep 17 00:00:00 2001 From: harshikaalagh-netizen Date: Mon, 30 Mar 2026 15:18:41 +0530 Subject: [PATCH 5/7] Update articles/anthropic-data-retention-policy.mdx via admin From f08c5b373d159e856036296aa690815fcf79b7a7 Mon Sep 17 00:00:00 2001 From: harshikaalagh-netizen Date: Mon, 30 Mar 2026 15:20:19 +0530 Subject: [PATCH 6/7] Update articles/anthropic-data-retention-policy.mdx via admin --- apps/web/content/articles/anthropic-data-retention-policy.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/apps/web/content/articles/anthropic-data-retention-policy.mdx b/apps/web/content/articles/anthropic-data-retention-policy.mdx index 2ba708b445..222210f582 100644 --- a/apps/web/content/articles/anthropic-data-retention-policy.mdx +++ b/apps/web/content/articles/anthropic-data-retention-policy.mdx @@ -1,5 +1,5 @@ --- -meta_title: "Anthropic Claude Data Retention Policy After September 2025" +meta_title: "Anthropic Claude Data Retention Policy 2026" meta_description: "Anthropic was supposed to be the privacy-first AI company. Then September 2025 happened. A full breakdown of what Claude keeps and how to control it." author: - "Harshika" From 17bd997c449c225c075a1fdfeb221cfb9d139436 Mon Sep 17 00:00:00 2001 From: harshikaalagh-netizen Date: Mon, 30 Mar 2026 15:21:07 +0530 Subject: [PATCH 7/7] Update articles/anthropic-data-retention-policy.mdx via admin