The Heppner Ruling Changed Everything: Why Your AI Workflow Just Became a Privilege Problem

By Don Ho, Co-Founder & CEO, Kaizen AI Lab

Published: February 14, 2026

TL;DR: A federal judge in the Southern District of New York ruled that documents generated through Claude aren't protected by attorney-client privilege. If your law firm uses consumer AI tools for client work, you have a privilege problem right now. I've reviewed the DPAs for three of my own deployments since this ruling. Two of them failed. The fix is architectural: self-hosted models, zero-retention API agreements, and data processing addendums with explicit privilege protections.

---

A Federal Judge Just Blew a Hole in Your AI Workflow

US v. Heppner landed in February 2026 out of the Southern District of New York. Judge Rakoff. The ruling was clean, direct, and devastating for every law firm running client work through consumer AI tools.

The government subpoenaed documents that a defense attorney had generated using Claude, Anthropic's AI assistant. The defense argued privilege. Judge Rakoff disagreed.

The reasoning wasn't complicated. AI tools aren't attorneys. Anthropic's privacy policy allows data collection and government disclosure. When you voluntarily share case details with a platform operating under those terms, you waive confidentiality. Period.

Read that again. Anything you typed into Claude about a client matter can be subpoenaed. The documents it generated for you aren't privileged. The case strategy you workshopped with an AI chatbot is now potentially discoverable.

Why This Matters Beyond One Ruling

This comes from Judge Jed Rakoff in the SDNY, one of the most influential trial judges in the federal system. His opinions carry weight. Other courts will cite this.

The logic extends beyond Claude. It applies to ChatGPT, Gemini, Copilot, and every other consumer AI platform with terms of service permitting data collection. Which is all of them.

According to a 2025 Thomson Reuters survey, over 60% of law firms reported using generative AI tools for some aspect of client work. Most were using consumer-grade products. Free tiers. Browser-based interfaces. No enterprise agreements. No data processing addendums.

Every one of those firms just discovered they may have been waiving privilege on client matters for the past two years.

Here's My Controversial Take

Heppner didn't go far enough. The ruling addressed consumer AI tools with permissive privacy policies. But the logic should extend to any cloud-hosted AI where the vendor retains even temporary processing rights.

Enterprise API agreements with "zero retention" clauses still route your data through third-party infrastructure. Your privileged communication hits their servers, gets processed, and allegedly gets deleted. But the transmission itself is a disclosure to a third party. If a subpoena arrives during that processing window, even a brief one, your privilege argument gets complicated.

I think we'll see a ruling within 18 months that challenges privilege even for enterprise-tier API usage. The only truly privilege-safe architecture is one where privileged data never leaves infrastructure you control. Self-hosted or nothing. Every other option is a calculated risk.

Some attorneys will call that extreme. I'd rather be extreme than be the next case study.

The Three Conditions That Kill Privilege

Judge Rakoff's analysis hinged on three elements that apply across consumer AI platforms:

Third-party operation. The AI platform is operated by someone other than the attorney or client. Anthropic, OpenAI, Google: none of them are your agent. They're service providers with their own data practices and legal obligations.

Terms permit data access. Most AI platforms include provisions allowing the company to access, store, and process user inputs. Some include cooperation with government requests. When the terms say they can look at your data, you've introduced a third party into what was supposed to be confidential.

Voluntary disclosure. Nobody forced the attorney to use Claude. Voluntary disclosure to a third party is waiver. Full stop.

These three conditions exist for essentially every consumer AI tool on the market. Heppner didn't create new law. It applied existing privilege doctrine to new technology. That's what makes it so hard to challenge on appeal.

What Firms Are Getting Wrong

The knee-jerk reaction from some firms has been to ban AI entirely. Wrong response. That's the equivalent of banning email in 2001 because someone sent confidential documents over an unsecured connection.

AI provides genuine efficiency gains for legal work. Research, drafting, document review, case analysis. The productivity improvements are real and measurable. Firms that abandon AI entirely will fall behind firms that use it correctly.

The problem was never the AI. The problem is the architecture around it.

Using consumer-grade tools for enterprise work. The free tier of ChatGPT and the consumer version of Claude were designed for individual users writing recipes and planning vacations. Using them for client work is like discussing case strategy on a speakerphone in a crowded restaurant.

Treating "privacy policies" as security guarantees. A privacy policy tells you what the company can do with your data. Most firms never read them. The ones who did often conflated "we don't actively sell your data" with "your data is protected." Very different statements.

No data governance layer. Most firms have no system for classifying which work can go through external AI and which must stay internal. Everything goes into the same chatbot.

The Fix: Privilege-Safe AI Architecture

The solution is architectural. Privileged communications never touch third-party servers with permissive privacy policies.

Self-Hosted Models

Run open-source models (Llama, Mistral, or others) on your own infrastructure or in a private cloud environment you control. Data never leaves your servers. No third-party privacy policy applies because there's no third party.

In 2026, this is increasingly accessible. Models that run efficiently on standard hardware are available. Managed private cloud deployments from AWS, Azure, and GCP offer enterprise-grade hosting with your encryption keys and your access controls.
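
To make that concrete, here's a minimal sketch of what "data never leaves your servers" looks like in practice. It assumes a self-hosted inference server (vLLM and Ollama both expose an OpenAI-compatible endpoint) running a Llama-family model on infrastructure you control; the host, port, and model name are placeholders for your own deployment, not a prescription.

```python
# Minimal sketch: querying a self-hosted model over an OpenAI-compatible API.
# Assumes a vLLM or Ollama server running on infrastructure you control;
# the base_url and model name are placeholders for your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your server, your network, no third party
    api_key="not-needed-locally",         # local servers typically ignore this value
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a drafting assistant for internal firm use."},
        {"role": "user", "content": "Summarize the attached deposition transcript."},
    ],
)

print(response.choices[0].message.content)
```

The privilege analysis changes because the request never crosses a network boundary you don't control. The same code pointed at a vendor's public API would reintroduce the third party.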

Zero-Retention API Agreements

If self-hosting isn't feasible, use enterprise API tiers with zero-retention agreements. Both OpenAI and Anthropic offer enterprise agreements in which they commit not to store, train on, or access your inputs and outputs.

The critical difference: these must be explicit, written contracts. Not a checkbox on a terms of service page. They must include specific provisions about data retention, training exclusion, and government disclosure obligations.

Get your data processing addendum reviewed by someone who understands both technology contracts and privilege law. Most standard DPAs weren't drafted with attorney-client privilege in mind.

Data Classification System

Before any AI tool touches any piece of work, you need a classification system. At minimum, three tiers:

Privileged. Client communications, case strategy, anything covered by attorney-client privilege or work-product protection. Self-hosted models only. This material never leaves infrastructure you control.

Confidential. Internal firm material that isn't privileged. Enterprise API tiers with a zero-retention agreement and a reviewed DPA.

General. Research on public law, drafting templates, administrative work. Any approved tool.

The classification has to be enforced where requests are actually made, not buried in a policy memo. A routing layer like the sketch below makes it operational.
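
Here's an illustrative sketch of that routing layer. The tier names, endpoints, and the route_request helper are hypothetical, not a reference to any firm's actual stack; the point is that the classification decision happens in code before a prompt goes anywhere.

```python
# Illustrative sketch of a classification-aware routing layer.
# Tier names, endpoints, and route_request are hypothetical; adapt to
# your own infrastructure and DPA terms.
from enum import Enum


class Tier(Enum):
    PRIVILEGED = "privileged"      # client communications, case strategy
    CONFIDENTIAL = "confidential"  # internal but non-privileged firm material
    GENERAL = "general"            # public-law research, templates, admin work


# Each tier maps to the only deployment allowed to see that data.
ROUTES = {
    Tier.PRIVILEGED: "https://llm.internal.firm.example/v1",  # self-hosted only
    Tier.CONFIDENTIAL: "https://api.vendor.example/v1",       # enterprise tier, zero-retention DPA
    Tier.GENERAL: "https://api.vendor.example/v1",
}


def route_request(tier: Tier | None) -> str:
    """Return the only endpoint a prompt of this tier may be sent to.

    Anything unclassified is treated as privileged, so the failure mode
    is 'too cautious', never 'waived privilege'.
    """
    return ROUTES.get(tier, ROUTES[Tier.PRIVILEGED])


print(route_request(Tier.PRIVILEGED))  # -> the self-hosted endpoint
print(route_request(None))             # unclassified defaults to self-hosted
```

The design choice that matters is the default: when classification is missing, the request stays internal.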

Audit Trail

Every interaction with every AI tool needs to be logged. Which attorney. Which tool. What classification level. When. You need this for compliance and to demonstrate to courts that you have a systematic approach to protecting privilege.
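
A minimal version of that audit trail is an append-only record written at the same choke point that routes the request. This sketch is illustrative; the field names and the JSONL file path are assumptions, and in production the log itself belongs on infrastructure you control.

```python
# Minimal sketch of an append-only audit log for AI interactions.
# Field names and the log path are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AuditEntry:
    attorney: str        # who made the request
    tool: str            # which model or endpoint handled it
    classification: str  # tier assigned before routing
    matter_id: str       # internal matter reference, never client content
    timestamp: str       # UTC, ISO 8601


def log_interaction(attorney: str, tool: str, classification: str,
                    matter_id: str, path: str = "ai_audit.jsonl") -> None:
    """Append one interaction record.

    Log metadata only, not prompt text: the audit trail must not become
    a second copy of privileged content.
    """
    entry = AuditEntry(
        attorney=attorney,
        tool=tool,
        classification=classification,
        matter_id=matter_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")


log_interaction("jdoe", "self-hosted-llama", "privileged", "M-2026-0142")
```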

What Happens Next

Heppner will be cited. Other courts will follow. Some might disagree with the reasoning, but the trend line is clear. Courts are going to scrutinize how lawyers use AI tools, and the analysis will center on the same questions: Was there a third party? Did the terms permit access? Was the disclosure voluntary?

The firms that will be fine are the ones building proper infrastructure now. Self-hosted models. Enterprise agreements with real legal teeth. Data classification systems. Audit trails.

The firms that will be in trouble are the ones reading this article and thinking "we'll deal with it later."

Later is going to be expensive.

---

Kaizen AI Lab helps organizations build AI systems with compliance and data governance built in from day one. If your firm needs help designing a privilege-safe AI architecture, we can help.

Take the AI Compliance Readiness Assessment: acra.kaizenailab.com

Learn more: kaizenailab.com

Book a call: cal.com/dhoesq/kaizen