Which AI Tools Respect Your Privacy?

Jun 10, 2025 · By Ryan Flanagan

TL;DR
Enterprise versions of LLMs protect your data by default. Consumer versions make you opt out, if they let you at all.
Some models still send your chats to human reviewers, even after you’ve deleted them.

Privacy is a design choice.

Most people think:
“If I don’t give it sensitive information, it won’t become a problem.”

But that assumes the tool treats your data as private. Most don’t—unless you’re paying for that promise.

Privacy with AI isn’t just about whether you can turn something off. It’s about what’s happening before you even know the option exists.

If you’re using ChatGPT…

It depends on the version.

If you’re using the free or Plus version, your chats are part of OpenAI’s training data by default. You have to dig into Settings → Data Controls and switch off the “Improve the model for everyone” option to stop that. Even then, your chat history sticks around unless you use Temporary Chat, which keeps those conversations out of your history and removes them within 30 days.

If you're on ChatGPT Enterprise, it's a different story. None of your prompts, files, or outputs are used for training—ever. You don’t need to change a thing. Your data stays your data.

So: free is convenient, but leaky. Paid enterprise plans are safer by design.

If you’re using Claude…

You’re in better hands than most.

By default, Claude doesn’t train on your data—unless you explicitly opt in by submitting feedback. That’s a major positive, especially for casual users or early testers.

But here’s the tradeoff: there’s no toggle to turn privacy settings on or off, because there are no settings. You’re trusting the default design. For some teams, that’s fine. Others might want more granular control—especially if you’re handling client material or regulated data.

If you’re using Gemini…

You need to be careful.

Gemini stores your conversation history by default. Even if you turn that history off, Google may keep anonymised versions of your chats for human review for up to three years, and that retention continues after you delete them from your account. That’s not just a footnote. That’s a serious flag.

Enterprise users (on Google Workspace) get stronger boundaries: no training, no human review, no data leaving your org. But if you’re using the regular version, your inputs are likely being used to improve the model—and possibly reviewed by people outside your company.

In short: Gemini is the tool most likely to surprise you later.

If you’re using Perplexity…

It gives you options—but you have to use them.

By default, your questions and answers help train the model. But in Settings → AI Data Usage, you can turn this off. There’s also an Incognito mode that doesn’t store anything, and a Memory section where you can clear or disable personalisation entirely.

That’s more transparency than most tools offer. But again, data sharing is on by default. You have to act to protect your data.

If you’re using Microsoft Copilot…

Check which version you’re on.

The enterprise version of Copilot (inside Microsoft 365) is built with clear walls. Your business data stays inside your tenant. It’s not used to train anything. It’s not shared outside your org. That’s a solid privacy posture.

But the consumer version is different. Chat history is saved for up to 18 months. Training is on by default (though personal identifiers are removed), and your voice inputs are treated like text. You can manage these settings in your account profile, but you need to go looking for them.

If you're unsure which version you're using, you probably don’t have the protected one.

So, how secure is your data?

It depends on:

  • Which tool you're using
  • What version you're on
  • Whether you've changed any settings

And let’s be blunt: most people haven’t. Most teams are still experimenting with AI tools, assuming they're sealed boxes. But many of them are more like public notebooks—with invisible readers and long memories.

Final thought

There’s no universal privacy standard in generative AI.
Some tools default to protection. Some default to training.
Some leave the door open for human review—long after you’ve closed the tab.

If you're experimenting with AI, it's not about banning tools. It's about knowing where the risks are and how to work around them.

Start by asking:

  1. What versions are we using?
  2. Who has access to our prompts and outputs?
  3. Have we changed the default settings?

If the answer to any of those is “not sure”, start with an AI Readiness Assessment.

You’ll get a clear view of where your data’s going, and what to fix before it becomes a problem.