Can I Just Use the Free Version of ChatGPT or Claude?
It's the most reasonable question anyone asks before paying for an AI tool.
The free versions are good. They answer almost everything. So before signing up for anything paid, the honest question is: do I really need more than that?
We wanted to give you a real answer instead of a scary one, so we sat down and read the actual privacy policies of OpenAI (ChatGPT) and Anthropic (Claude). Both of them. Cover to cover.
Here's the honest version.
What the free versions actually keep
Let's just stick to what the policies say, in plain English.
When you use the free version of ChatGPT, OpenAI keeps:
- Everything you type, plus any files, images, audio, or video you upload.
- Your name, email, account info, and any profile details.
- Your IP address, browser, device, time zone, and how you use the product.
- Your general location from your IP. Precise location too, if you turn it on.
- Your contacts, if you ever connect them, including which of your contacts also use the service.
When you use the free version of Claude, Anthropic keeps:
- Every prompt and every response. The policy is direct: if you include personal data in a prompt, that gets collected, and it can show up in the response too.
- Your name, email, phone number, and payment info if you ever upgrade.
- Your IP, device, browser, location derived from IP, and how you use the product.
- The whole conversation if you ever click thumbs up or thumbs down. That gets stored as feedback.
So even with a careful black marker through a few words, most of the metadata around your message is collected anyway, on both products, on the free tier.
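To make "metadata around your message" concrete, here is a sketch of the kind of record a chat backend can associate with a single request. The field names are invented for illustration (neither company publishes its logging schema); the categories come straight from the policies above:

```python
# Hypothetical request record. Field names are illustrative only,
# not taken from OpenAI's or Anthropic's actual systems.
request_record = {
    "account_id": "acct_12345",           # linked to your name and email
    "ip_address": "203.0.113.42",         # geolocates to roughly city level
    "user_agent": "Mozilla/5.0 (Macintosh)",  # browser and OS fingerprint
    "timezone": "Europe/Berlin",
    "timestamp": "2026-04-02T23:14:07Z",  # when you asked
    "message": "Is a non-compete enforceable for a [REDACTED] engineer?",
}

# The redaction removes one word; every other field still identifies you.
identifying_fields = [k for k in request_record if k != "message"]
```

Redacting changes one value in that record. The other five arrive regardless.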
"Will my conversation be used to train the AI?"
On both ChatGPT and Claude, the free version's default setting is yes. There is an opt-out. It's a setting you have to find and switch off yourself.
That means most people, the ones who never open the privacy settings, are training the next model with their work. With their drafts. With the question they asked late at night.
This part is not a Wysor opinion. It's the policies.
OpenAI says it uses Content "to improve and develop our Services and conduct research, for example to develop new features," and explicitly mentions training the models that power ChatGPT, with an opt-out.
Anthropic says it "may use your Inputs and Outputs to train our models and improve our Services, unless you opt out through your account settings." It also notes two cases where the opt-out doesn't apply: anything flagged for safety review, and anything you submitted as feedback.
So whatever you paste, in a free account with default settings, is fair game for training.
"What about Temporary Chats?"
ChatGPT has a Temporary Chats mode. It looks like the obvious answer: nothing saved to your history, nothing used for training. Many people who care about privacy use it for exactly this kind of thing.
It's better than a normal chat. It's not the eraser most people assume.
OpenAI's own policy says Temporary Chats "will be automatically deleted within 30 days (unless we have to retain them for safety or legal reasons)." The chat does sit on their servers for up to 30 days first.
So in 2025, when a US federal court ordered OpenAI to preserve essentially all ChatGPT conversations as evidence in an unrelated lawsuit, that order swept in chats users thought were ephemeral. Regular users were not parties to the case. Their data was held anyway.
Temporary mode is a useful feature. It is not the same as the message never existing.
"Does upgrading to Plus or Pro fix any of this?"
Mostly no. The privacy policies for the free version, ChatGPT Plus, and ChatGPT Pro are the same consumer policy. The same is true for Claude.ai free and Claude Pro. Paying gets you more model access, more usage, more features. It does not move you to a different rulebook.
The rulebook only changes when a company buys an enterprise plan, signs an agreement, and goes through procurement. Then a different set of terms applies. That is the part most regular users do not see, and it's the part that matters.
"What if I just redact the sensitive bits?"
A common workaround. Black out the salary, swap the company name, then paste. Four reasons it works less than you'd hope.
Redaction has to be perfect every time. Miss one line, leave a name in a footer, and that bit is now sitting on a server you don't control. No undo. And it doesn't sit there alone, it sits attached to your account, linked to every other message you've ever sent.
The question gives the redaction away. "Is this non-compete enforceable in California for a senior engineer paid above the standard threshold?" You didn't type your salary, but you just told the model your role, your state, and that you earn above a specific number. Your name, your IP, your address, your device, and the rest of your message history are already attached to that profile. The redaction is cosmetic, the picture is already there.
Pictures carry their own confession. When you upload a photo, the file usually carries metadata with it. EXIF data: GPS coordinates of where the photo was taken, the exact timestamp, the make and serial number of the camera or phone. This is the same metadata law enforcement and forensic teams use to place a person at a scene. Most people never strip it before uploading. The visible content of the image is what you redacted. The invisible content is fully intact.
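To show what stripping that invisible content looks like mechanically, here is a minimal Python sketch that removes the APP1 segment, which is where JPEG files carry EXIF data, including GPS coordinates. It is a simplified illustration using a synthetic byte stream, not a replacement for a real tool like exiftool, which handles thumbnails, XMP, and other metadata locations:

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Remove APP1 (EXIF) segments from a JPEG byte stream.

    Simplified sketch: assumes well-formed header segments and stops
    parsing at the Start-of-Scan marker, where image data begins.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # unexpected data: copy the rest verbatim
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed pixel data follows
            out += jpeg_bytes[i:]
            break
        # Segment length covers its own two length bytes, per the JPEG spec.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker != 0xE1:  # keep every segment except APP1 (EXIF)
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)


# Demo: a synthetic JPEG with a fake EXIF segment holding coordinates.
payload = b"Exif\x00\x00gps=52.5200,13.4050"
app1 = b"\xff\xe1" + (len(payload) + 2).to_bytes(2, "big") + payload
fake_jpeg = b"\xff\xd8" + app1 + b"\xff\xda\x00\x02" + b"pixels" + b"\xff\xd9"

cleaned = strip_exif(fake_jpeg)
```

The pixel data survives untouched; the coordinates do not. That is the whole point: the visible image and its invisible metadata travel in separate segments, and redacting one says nothing about the other.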
Stored doesn't mean sealed. Conversations sit in systems where safety reviewers, training pipelines, and engineers can open them. Google's Gemini help page tells users directly: "please don't enter confidential information that you wouldn't want a reviewer to see."
Redacting is fine for low-stakes stuff. It's not the protection most people think it is once what you're pasting actually matters.
Where Wysor lands
We didn't want to redact and hope. We wanted a tool we could just use.
So we built Wysor on top of the same models you'd use for free elsewhere. Claude. GPT-5. Gemini. Perplexity. Open-source models. The difference is the agreement underneath.
- Your conversations are not used to train any model. Ever. Not on free, not on paid. It's in our contract with each provider, not a setting in your account.
- We use Zero Data Retention (ZDR) by default with most of the providers we route to. The provider processes your message, returns the answer, and deletes the request and response from their infrastructure. No 30-day window. No chat history sitting on someone else's server.
- Built-in research across legal and medical databanks. The work you do in those domains (a contract clause, a drug interaction, a case lookup) is exactly the kind you don't want stored on someone else's server. With Wysor it isn't.
- Many of our models run on EU-hosted servers. Your message is answered in Europe and your data stays there.
- On the iOS app, voice transcription happens on your phone. The audio never leaves it.
You don't need to redact a contract before pasting it into Wysor, because the contract isn't being kept. You also don't need a procurement department to get any of this. You sign up and that's the default.
At a glance
|  | ChatGPT (free) | Claude (free) | Wysor |
|---|---|---|---|
| Your conversations are kept | Yes, until you delete them | Yes, until you delete them | Zero data retention with most providers, by default |
| Used to train the model | On by default, opt-out available | On by default, opt-out available | Never, by contract |
| Deletion | Up to 30 days, longer if legally required | Up to 30 days to clear back-end systems | Provider deletes immediately under ZDR for most models |
| Where your data is processed | US and other countries | US, with EU transfer agreements | EU-hosted for many models |
| Voice notes | Stored with the chat | Sent to servers | On-device on iOS |
| What changes if you pay | Features. Privacy posture is the same | Features. Privacy posture is the same | Same privacy on free and paid |
So, can you just use the free version?
For everyday questions, yes. There's nothing wrong with using a free tool to ask a free-tool question.
The reason we built Wysor is that almost nobody only uses AI for free-tool questions. Once you've tried it for a few weeks, the contract goes in. The medical question goes in. The pricing strategy goes in. The thing you haven't told anyone yet goes in.
A black marker on the way in is not a bad instinct. It's just not the protection most people think it is. The protection is in what the company is allowed to do with what you sent, and that's set by the policy, not the redaction.
If you'd rather not have to think about this every time you open an AI tool, that's the gap we built Wysor to fill.
Keep reading
- Your AI Knows More Than You Think: A Privacy Comparison of Claude, ChatGPT, and Gemini
- Complete Privacy: Your Data Never Leaves Your Control
- Your Voice Notes Are Being Sent to Apple. Ours Aren't.
- Shadow AI: When Your Employees Use ChatGPT Behind Your Back
Sourced from OpenAI's consumer privacy policy and Anthropic's consumer privacy policy (effective 12 January 2026), read in full as of April 2026.