Your AI Knows More Than You Think: A Privacy Comparison of Claude, ChatGPT, and Gemini

Every day you paste contracts, salaries, medical questions, legal documents, and business strategies into AI. You dictate voice notes to it. You trust it more than most of your coworkers.

But what happens to everything you tell it?

We read the privacy policies so you don't have to. Here's what we found — and why it's exactly the reason we built Wysor.


Consumer plans and enterprise plans are different products

Claude Pro. ChatGPT Plus. Gemini Advanced. Paying $20/month gets you better models and higher usage limits, but the data-handling policies on consumer paid plans are largely the same as on the free tiers.

To get contractual privacy protections — DPAs, zero data retention, training exclusions — most providers require an enterprise agreement. That means a longer procurement process and higher pricing.

Wysor gives you those enterprise-grade protections on every plan, including free.


How long your conversations sit on their servers

|  | Claude Pro | ChatGPT Plus/Pro | Gemini Advanced | Wysor |
|---|---|---|---|---|
| Default retention | 30 days (up to 5 years if opted into training) | Retained while account is active | 18 months | 30 days max (OpenAI); zero for other providers |
| After you delete | Up to 30 days to remove | Up to 30 days to remove | At least 72 hours | Minimized via dedicated DPAs |

Claude keeps your data for 30 days by default. In October 2025, Anthropic rolled out a training opt-in prompt with the toggle pre-set to "on" — meaning if you didn't actively respond to the prompt, your conversations may be stored for up to 5 years.

ChatGPT retains conversations while your account is active. After you delete a conversation, OpenAI takes up to 30 days to fully remove it from their systems.

Gemini retains conversations for up to 18 months. Even with "Keep Activity" turned off, Google still holds data for at least 72 hours.

Wysor's dedicated DPAs with every provider minimize retention to the technical minimum — for most providers, that's zero. Your conversations live in your workspace, not on provider servers for months.


Training defaults vary by provider

By default, most consumer AI plans allow your conversations to be used for model improvement:

  • Claude: Opt-in since October 2025, but Anthropic defaulted it to "on" for users who didn't actively respond
  • ChatGPT: On by default. Can be disabled in Settings > Data Controls
  • Gemini: On by default. Disable "Keep Activity" to opt out

Enterprise and API tiers at all three providers typically exclude training by default. But individual and small-team users on consumer plans need to configure this themselves.

Wysor never trains on your data. Not on free. Not on paid. Not on any plan. It's contractual, not a toggle.


Human review practices across providers

All three major providers have some form of human review in their data pipeline:

Google is transparent about it — they explicitly advise Gemini users not to enter confidential information. Human-reviewed conversations may be stored for up to 3 years, disconnected from your account.

OpenAI states in their EU Privacy Policy that they may review content for safety and compliance purposes. Opting out of training does not opt you out of review. On API and enterprise tiers, these practices differ — OpenAI offers zero data retention for qualifying API customers.

Anthropic routes flagged Claude conversations to their Trust & Safety team. Flagged inputs and outputs can be retained for up to 2 years, with safety scores kept for up to 7 years.


What "delete" means at each provider

Deletion timelines vary, and in some cases data may persist longer than expected:

  • Gemini: If a conversation was selected for human review before deletion, that copy may persist for up to 3 years separately.
  • Claude: Flagged conversations are retained for up to 2 years regardless of deletion.
  • ChatGPT: Standard deletion takes up to 30 days. In certain legal proceedings (such as NYT v. OpenAI in 2025), courts have ordered providers to preserve conversation data — a legal reality that can affect any provider, not just OpenAI.

With Wysor, delete means delete. When providers don't hold your data beyond the minimum processing window, there's nothing to preserve.


AI conversations and legal privilege

In February 2026, United States v. Heppner established that conversations with AI assistants are not protected by attorney-client privilege and do not constitute work product.

The reasoning: provider policies allow potential disclosure to governmental authorities and may permit use for model improvement. The court found no reasonable expectation of confidentiality.

This precedent applies across providers — any AI conversation on a consumer plan is potentially discoverable in legal proceedings. Minimizing provider-side retention reduces this exposure.


Voice data handling

AI voice features are increasingly popular. Here's how each provider handles audio data.

Claude Voice

  • Audio sent to Anthropic's servers, then deleted within ~24 hours
  • Text-to-speech uses ElevenLabs (a third-party provider)
  • Speech-to-text provider not publicly disclosed
  • No public Zero Data Retention arrangement between Anthropic and ElevenLabs

ChatGPT Voice

  • Audio and video clips stored alongside the conversation for as long as the chat exists
  • Audio isn't used for training, but text transcripts may be if training is enabled

Gemini Voice

  • Google's "Ephemeral Learning" processes wake-word audio in RAM, but actual voice requests are sent to Google's servers
  • Google has faced litigation over voice data handling, including a $68M settlement related to privacy practices

Apple Siri

  • Server-processed requests: Apple stores transcripts for up to 2 years
  • On-device dictation stays local, but Apple collects metadata (request category, device info, performance stats)
  • "Improve Siri" opt-in allows Apple to keep voice recordings for up to 2 years
  • Apple settled a $95M lawsuit over inadvertent Siri activations

Wysor Voice

  • Everything happens on your device. Audio never leaves your phone
  • No servers. No third parties. No cloud processing. Works offline
  • We can't leak your voice data because we never receive it

Enterprise plans: strong protection, higher cost

All three providers offer enterprise tiers with significantly better privacy protections — DPAs, training exclusions, and in some cases, zero data retention. These are solid options for organizations with the budget and procurement resources.

|  | Claude Enterprise | ChatGPT Enterprise | Gemini Workspace | Wysor |
|---|---|---|---|---|
| Zero data retention | API, by approval | Available for qualifying customers | Configurable | Default on every plan |
| Human review | Reduced (flagged data may still be reviewed) | None with ZDR | None | None by default |
| Procurement process | Enterprise sales | Enterprise sales | Enterprise sales | Self-serve signup |

For OpenAI, where we're still finalizing a dedicated agreement, we explicitly disable their data storage on every request we send — enforced technically, not just requested by policy. Abuse logs may persist for up to 30 days.
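To make "disabled on every request" concrete, here is a minimal sketch of what a per-request opt-out can look like with OpenAI's Responses API, which accepts a `store` flag on each call. This is an illustrative example based on OpenAI's public API documentation, not Wysor's actual implementation; the model name and prompt are placeholders.

```python
import json

def build_request(prompt: str) -> dict:
    """Build an API payload with provider-side storage disabled.

    Setting "store" on every request (rather than relying on an
    account-level setting) means a changed default can never silently
    re-enable retention.
    """
    return {
        "model": "gpt-5",   # placeholder model name
        "input": prompt,
        "store": False,     # do not persist this request/response server-side
    }

payload = build_request("Summarize this contract...")
print(json.dumps(payload, indent=2))
```

Because the flag travels with each request, the opt-out is enforced at the protocol level rather than depending on a dashboard toggle.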

Enterprise plans are the right choice for large organizations. But for individuals, small teams, and businesses that want the same protections without a procurement cycle, Wysor makes them accessible from day one.


What this means for you

Your employees are using AI every day — for contracts, strategy, client communications. The question is whether those conversations are protected by contractual guarantees or just default settings.

Wysor gives every user — from free to premium — access to GPT-5, Claude, Gemini, and Perplexity in one workspace, backed by dedicated DPAs that minimize provider retention, prohibit training, and limit provider-side data access. Plus on-device voice transcription, email management, and a full audit trail.

Enterprise-grade privacy shouldn't require an enterprise contract.



Get started with Wysor →


All claims sourced from official privacy policies, terms of service, court documents, and provider documentation as of February 2026.