
You're Training Your Replacement: How Cowork Uses Your Expertise

We've all internalized the old saying: "If a product is free, you're the product." So when AI tools started charging $20/month, many of us quietly relaxed. We're paying. We're the customer. Our data is probably fine.

That assumption breaks down once you look closely at how "agentic" tools like Anthropic's Cowork actually work.

Cowork is a new mode in Claude Desktop. Instead of a back-and-forth chat, you grant it access to your local files and let it perform tasks: organizing downloads, analyzing spreadsheets, drafting presentations, synthesizing research across dozens of documents. The marketing leans heavily on the claim that it runs "on your computer" and emphasizes "local" usage.

Used correctly, it's powerful. But the way it handles your data, and how that data can be used, deserves a clear explanation.


What "Runs Locally" Actually Means

Anthropic says Cowork "runs on your computer." Conversation history is "stored locally." There's a local VM that keeps things "isolated."

Most people will understandably read this as: my files are processed locally on my machine.

That's not what's happening.

The AI model itself does not run on your laptop. It runs in Anthropic's data centers. When Cowork reads your files, the contents are sent to Anthropic's servers for processing. What's "stored locally" is the chat log and session state, not the core model computation.

In other words: the fact that you see a desktop app and a local VM does not mean your documents never leave your device. They do. "Runs locally" and "processes locally" are very different statements from a privacy perspective.
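
To make the distinction concrete, here is a minimal sketch of the data flow any cloud-backed assistant, Cowork included, has to follow. It uses Anthropic's official Python SDK; the file path and prompt are invented for illustration, and this is not Cowork's actual code, but the shape is the same: the file read is local, the model call is not.

```python
# pip install anthropic
from pathlib import Path

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The "local" part: reading the file happens on your machine.
# (The path is a made-up example.)
report = Path("~/Documents/clients/q3_strategy.md").expanduser().read_text()

# The non-local part: the file's full contents travel to Anthropic's
# servers inside this request, because that's where the model runs.
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": f"Summarize this document:\n\n{report}"}],
)
print(response.content[0].text)
```

Nothing about wrapping this loop in a desktop app or a local VM changes the middle step: the document body rides along in the request.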


Training: Opt-Out, Not Off by Default

On Claude Pro, your conversations are eligible for model training by default unless you explicitly opt out. Anthropic introduced an explicit consent prompt in late 2025, but the toggle in that prompt defaulted to allowing training, and if you dismissed or clicked through it, your account may still be opted in.

Combine that with Cowork's behavior:

In a normal chat, you selectively paste content. In Cowork, you can grant access to entire folders: client proposals, financial models, strategy docs, internal memos, competitive analyses. That's a vastly larger and more sensitive dataset than a typical chat interaction.

Even when you disable training, your data is still sent to Anthropic's servers to fulfill your request. Turning off "Help improve Claude" addresses only one part of the lifecycle, not the broader questions of retention, access, and auditability.


Enterprise Plans Don't Fully Cover Cowork

It's reasonable to assume: "The consumer plans are loose, but Enterprise must be locked down. That's what the premium is for."

Anthropic's own documentation for Cowork complicates that expectation. They explicitly state:

"Cowork activity is not captured in Audit Logs, Compliance API, or Data Exports. Do not use Cowork for regulated workloads."

The audit logs and compliance APIs Enterprise customers rely on do not reflect Cowork activity. Data processed via Cowork falls outside normal export and oversight mechanisms. Anthropic itself warns against using Cowork for regulated workloads.

So while your organization may have a DPA, audits, and retention controls for core Claude usage, Cowork sits outside those controls unless and until Anthropic changes the integration.

This is the opposite of what many security and compliance teams assume: that new features inherit existing guardrails by default.
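
If your compliance team wants to sanity-check this, the test is simple in principle: export your audit logs and look for Cowork events. The sketch below assumes a JSON Lines export with an event-type field; both are assumptions for illustration, not Anthropic's actual schema.

```python
import json
from pathlib import Path

# Hypothetical schema: assumes the audit export is JSON Lines and each
# event carries an "event_type" field. These names are illustrative,
# not Anthropic's actual export format.
lines = Path("audit_log_export.jsonl").read_text().splitlines()
events = [json.loads(line) for line in lines if line.strip()]

cowork_events = [e for e in events if "cowork" in e.get("event_type", "").lower()]

print(f"{len(events)} events total, {len(cowork_events)} from Cowork")
# Per the documentation quoted above, expect the second number to be
# zero regardless of how heavily Cowork was actually used.
```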


The Real Risk: You're Systematizing Your Own Expertise

The most important, and least discussed, dynamic is not simply that your documents are processed or even retained. It's how your expertise is being encoded.

When a consultant feeds recurring client work through Cowork, or an analyst uses it to structure proprietary research, or an executive runs internal strategy documents through it, they're not just using a tool. They're systematizing their own patterns: analytical frameworks, writing style and tone, domain-specific heuristics, industry knowledge accumulated over years.

If that data is used in model training, those patterns become part of the model's behavior. Future users can ask Claude to "write a competitive analysis for [your industry]" and receive output influenced by the patterns of previous users in that same domain. Users who never intended their frameworks to become a generic product feature.

It's a straightforward consequence of how large models learn from training data. The more specialized your work, the more valuable it is as training signal, and the more directly it can erode your differentiation if you're not careful about where that data goes.

In the 2010s, the trade was attention in exchange for free products and targeted ads. In the 2020s, the trade is rapidly becoming your expertise in exchange for model improvements.


Web Search: A Separate Egress Path

Even if you try to lock down network access inside Cowork, there's an important exception: web search.

Anthropic's docs note:

"Network egress permissions don't apply to the web search tool."

Queries Cowork issues on your behalf (what you're researching, which markets, which technologies) are still sent to external search providers. These queries can reveal business context like clients, industries, deal structures, and competitive landscapes, even if the underlying documents remain constrained.

You may successfully restrict direct file egress while unintentionally exposing a detailed footprint of your research interests and patterns.
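
To see why this exception is easy to miss, here's a deliberately simplified model of the permission check the docs describe. Every name in it is hypothetical; it mirrors the documented behavior ("network egress permissions don't apply to the web search tool"), not Cowork's actual implementation.

```python
# Hypothetical sketch of the carve-out described in the docs. None of
# these names are Cowork's real internals.

NETWORK_EGRESS_ALLOWED = False  # you've locked down outbound traffic


def egress_permitted(tool: str) -> bool:
    # The documented exception: web search queries go to external
    # search providers regardless of the egress setting.
    if tool == "web_search":
        return True
    return NETWORK_EGRESS_ALLOWED


# Sending file contents out directly: blocked, as you'd expect.
print(egress_permitted(tool="file_upload"))  # False

# But the search query itself still leaves the machine, and the query
# alone ("acquisition comps for mid-market logistics SaaS") reveals plenty.
print(egress_permitted(tool="web_search"))   # True
```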


Why Consumer Plans Look the Way They Do

Enterprise offerings typically provide DPAs, training exclusions, stronger retention controls, audit logs and compliance tooling. Those are negotiated, contractual protections. They take time, lawyers, and budget.

By contrast, most early adopters of Cowork are on individual or small-team plans. Freelancers and independent consultants. Boutique agencies. Small internal teams and individual knowledge workers.

This group's data is disproportionately valuable: highly specialized, directly tied to revenue-generating expertise, dense with proprietary frameworks and tactics. Yet they're on plans that default to training unless you explicitly opt out, rely on UI toggles rather than enforceable contracts, and offer limited auditability and governance.

That's not necessarily malicious. It's how consumer SaaS has worked for years. But the business model depends on large volumes of high-quality user data, and individual experts are an especially rich source of that data.


How We Designed Wysor Differently

When we built Wysor, we started from a different premise: your expertise is an asset, not input fuel for someone else's model.

Before writing code, we put Data Processing Agreements in place with every AI provider we use. Not as a preference toggle, but as a baseline legal guarantee about how your data is treated.

|                       | Claude Pro + Cowork                       | Wysor                                                            |
|-----------------------|-------------------------------------------|------------------------------------------------------------------|
| Training on your data | UI opt-out (historically defaulted to on) | Contractually prohibited on every plan, including free            |
| Audit trail           | No audit logging for Cowork activity      | Full audit trail of activity and access                           |
| Data retention        | 30 days to 5 years depending on context   | Minimized to the strict technical minimum, defined in contracts   |
| Enforcement mechanism | Settings toggles and policy language      | DPAs and commercial contracts with legal recourse                 |

If you're a consultant, analyst, or operator whose documents are your differentiation, this matters. You should be able to use advanced AI without turning your unique patterns into a generic feature for everyone else.


If You're Using Cowork Today

Cowork can be genuinely useful. The point is not to avoid it entirely. It's to understand the tradeoffs and make informed choices.

  1. Check your training settings. In Claude, go to Settings → Privacy and make sure "Help improve Claude" is off if you don't want your data used in training.

  2. Be deliberate about which folders you expose. Avoid pointing Cowork at directories containing your most sensitive client work, internal strategy, or regulated data, especially until Anthropic offers full audit logging for Cowork.

  3. Recognize the limits of the toggles. Opting out of training doesn't prevent all forms of data access or retention. Web search and other integrations can still reveal significant context about what you're working on.

  4. Prefer contractual protections where possible. For work that materially affects your clients or your competitive edge, use tools where privacy guarantees are backed by contracts, not just preferences.

Or use a workspace that starts from contractual privacy guarantees and designs around them, rather than bolting privacy on as an afterthought.




All product descriptions and policy references are based on Anthropic's Cowork documentation, Claude privacy policies, and terms of service as of March 2026. Always review the latest official documentation before making decisions about sensitive or regulated workloads.