Your ChatGPT Conversation Is Now Evidence
By Don Ho, Esq. | February 23, 2026
---
On February 10, 2026, a federal judge in the Southern District of New York did something that should make every business owner, GC, and compliance officer stop and reread their AI tool policy. Judge Jed Rakoff ruled that documents created in a public AI tool — ChatGPT, Claude, whatever the employee happened to be using that day — are not protected by attorney-client privilege. Not protected by work product doctrine either. The defendant typed sensitive legal strategy into a consumer AI platform. The government got every word.
This is United States v. Heppner. And it is not a narrow litigation footnote.
The Gibson Dunn and Morrison Foerster client alerts went out within 48 hours. Every major Am Law 200 firm sent a memo. The compliance podcast circuit is running Heppner episodes back to back. This is the ruling that the legal technology industry has been quietly dreading since generative AI became mainstream.
Here is what it means and what you need to do about it.
---
What Heppner Actually Held (and Why the Reasoning Is Airtight)
The facts are straightforward. The defendant, facing a federal investigation, used a third-party generative AI tool to draft and refine documents before eventually sharing them with his attorney. When prosecutors sought those documents in discovery, the defense argued privilege.
Judge Rakoff applied existing doctrine to new technology and the result was predictable to anyone who has read the ToS on a consumer AI tool:
The AI tool is not your attorney. Attorney-client privilege protects confidential communications between a client and their lawyer. The AI platform is a third party. Typing your legal strategy into ChatGPT is legally equivalent to typing it into a shared Google Doc with a stranger.
Consumer AI tools disclaim confidentiality. Read the terms of service on any major consumer AI platform. They retain input data. They use it for training. They share it in response to legal process. There is no reasonable expectation of confidentiality when you agree to terms that explicitly say otherwise.
Work product doctrine requires counsel's direction. Work product protects materials prepared in anticipation of litigation, at the direction of counsel. If an employee runs to ChatGPT on their own initiative before looping in a lawyer, there is no work product protection. The documents were created outside the attorney-client relationship.
Sending AI outputs to your lawyer retroactively does not create privilege. This one surprised people. Many assumed that forwarding the AI-generated drafts to counsel would bring them under the umbrella. It doesn't. The privilege attaches at creation, under the right circumstances. Not at transmission.
The court did not stretch the law. It applied it. That is why this ruling is more dangerous than a creative interpretation: it will be followed by every court that encounters similar facts.
---
The Business Problem Is Bigger Than One Criminal Case
Heppner involved a criminal defendant. The implications run well past that.
Every day, employees at companies of every size are doing the following:
- Pasting customer complaints and dispute details into ChatGPT to help draft response letters
- Running contract language through Claude to identify risk
- Summarizing financial data in an AI tool to prepare for board meetings
- Asking AI tools to analyze communications for patterns relevant to a pending HR investigation
- Using AI to draft responses to regulatory inquiries
In each of those scenarios, under the Heppner reasoning, those inputs and outputs are potentially discoverable. If litigation follows, the opposing party's counsel now has a roadmap: subpoena the AI platform, request production of any communications involving the AI tool, and argue that privilege never attached.
You might think: fine, we'll just use enterprise AI tools that guarantee confidentiality. That argument has merit. Enterprise contracts with Anthropic, OpenAI, Microsoft Copilot, and Google Workspace typically include data processing agreements that prohibit training on your inputs and provide confidentiality protections. Judge Rakoff's reasoning focused specifically on consumer tools with terms that "defeated any reasonable expectation of confidentiality."
But most companies are not operating with an enterprise AI policy that distinguishes between consumer and enterprise tools. Most companies have 15 different AI subscriptions scattered across their teams, half of them personal consumer accounts that employees use for work because the company hasn't rolled out anything better.
That is your exposure.
---
The Three-Tier AI Governance Model Every GC Needs Now
Heppner essentially forces businesses to build what I call a privilege-aware AI governance model. The core logic is simple: where your data goes determines whether your communications stay protected.
Tier 1: Open Consumer Tools (No Privilege, No Confidentiality)
ChatGPT free, Claude free, Gemini free. Personal accounts on any major AI platform. No enterprise agreement. Terms allow data retention and potential disclosure. Zero privilege protection for anything entered here. Your policy should prohibit using these tools for anything involving client data, legal strategy, pending litigation, regulatory matters, or HR investigations.
Tier 2: Enterprise AI with Data Processing Agreements
Microsoft 365 Copilot, Gemini for Google Workspace, Claude under an enterprise agreement, ChatGPT Enterprise. These tools provide contractual confidentiality. The inputs are not used for training. They are not subject to the same third-party disclosure argument. Work here should still be done at the direction of counsel for maximum protection, but you have a much stronger privilege argument.
Tier 3: Private/Self-Hosted AI
Models running on your own infrastructure, behind your own security perimeter. No third-party data sharing. Maximum privilege protection. Realistic only for larger organizations or specific high-sensitivity workflows.
The policy implication is direct: anything touching legal strategy, active disputes, regulatory investigations, or privileged client communications needs to happen in Tier 2 or Tier 3 only. If you don't have a written policy that maps these tiers to your actual tools, you don't have a policy.
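To make the mapping concrete, here is a minimal sketch of such a policy expressed as data, so the same artifact can drive tooling later. Every tool name, tier assignment, and matter label below is an illustrative assumption, not a canonical list; substitute your actual inventory.

```python
# Sketch of a privilege-aware AI tool policy expressed as data.
# Tool names, tier assignments, and matter labels are illustrative assumptions.

TOOL_TIERS = {
    # Tier 1: open consumer tools (no privilege, no confidentiality)
    "chatgpt-free": 1,
    "claude-free": 1,
    "gemini-free": 1,
    # Tier 2: enterprise tools covered by a data processing agreement
    "microsoft-365-copilot": 2,
    "gemini-workspace": 2,
    "claude-enterprise": 2,
    "chatgpt-enterprise": 2,
    # Tier 3: self-hosted models behind your own perimeter
    "internal-llm": 3,
}

# Work categories that must stay in Tier 2 or Tier 3 (assumed labels).
SENSITIVE_MATTERS = {
    "legal-strategy", "active-dispute", "regulatory-investigation",
    "hr-investigation", "privileged-client-communication",
}

def tool_allowed(tool: str, matter: str) -> bool:
    """Deny by default: unknown tools fail, and sensitive matters need Tier 2+."""
    tier = TOOL_TIERS.get(tool)
    if tier is None:
        return False  # unapproved or unknown tool: deny and flag for review
    required = 2 if matter in SENSITIVE_MATTERS else 1
    return tier >= required

print(tool_allowed("chatgpt-free", "legal-strategy"))           # False
print(tool_allowed("microsoft-365-copilot", "legal-strategy"))  # True
```

The deny-by-default behavior for unknown tools is deliberate: the audit described below will surface tools you didn't know about, and the policy should fail closed until they are classified.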
---
What to Do Before Your Next Legal Matter
Here is the audit your legal team should run. Not next quarter. This week.
Before you do that audit, recognize what the actual failure mode looks like in practice. It is not a rogue employee typing trade secrets into ChatGPT for fun. It is a competent, well-meaning employee who is trying to work efficiently. They have a dispute on their desk, a regulatory inquiry in their inbox, or an HR investigation to manage. They know AI tools make them faster. They use the one they have on their laptop. They may not even have a company-provided alternative. They type the facts in, get a draft response back, clean it up, and send it to counsel. That sequence — which plays out thousands of times a day across American businesses — just became a discovery vehicle.
Map every AI tool in use across the organization. Not just the ones IT approved. The ones employees are actually using. Survey your teams. Check your expense reports for AI subscriptions. You will find tools you didn't know about.
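One rough sketch of the expense-report pass: export expenses to CSV and flag lines whose vendor matches a known AI-vendor keyword. The file name (`expenses.csv`), the `vendor` and `amount` columns, and the keyword list are all assumptions to adapt to your own export format.

```python
import csv

# Sketch: flag expense lines that look like AI subscriptions.
# File name, column names, and the keyword list are assumed, not standard.

AI_VENDOR_KEYWORDS = [
    "openai", "chatgpt", "anthropic", "claude",
    "gemini", "copilot", "perplexity", "midjourney",
]

with open("expenses.csv", newline="") as f:
    for row in csv.DictReader(f):
        vendor = row["vendor"].lower()
        if any(keyword in vendor for keyword in AI_VENDOR_KEYWORDS):
            print(f"Possible AI subscription: {row['vendor']} ({row.get('amount', '?')})")
```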
Classify each tool by tier. Consumer or enterprise? Does a data processing agreement exist? Has it been reviewed?
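And a matching sketch for the classification step, assuming the survey results land in a CSV with `tool`, `hosting`, `account_type`, and `dpa_reviewed` columns; that layout is a hypothetical convention, not a standard format. The logic mirrors the questions above and defaults to Tier 1 when in doubt.

```python
import csv

# Sketch: classify surveyed AI tools into the three-tier model.
# The inventory file and its columns are assumed conventions for illustration.

def classify(row: dict) -> int:
    """Map audit answers onto tiers; when in doubt, treat a tool as Tier 1."""
    if row["hosting"] == "self-hosted":
        return 3  # Tier 3: runs on your own infrastructure
    if row["account_type"] == "enterprise" and row["dpa_reviewed"] == "yes":
        return 2  # Tier 2: contractual confidentiality in place
    return 1      # Tier 1: consumer account or unreviewed terms

with open("ai_tool_inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        tier = classify(row)
        note = "  <- prohibit for legal/compliance work" if tier == 1 else ""
        print(f"{row['tool']}: Tier {tier}{note}")
```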
Identify the high-risk workflows. Where are employees most likely to use AI on legally sensitive matters? HR investigations, contract disputes, customer complaints that have escalated, regulatory correspondence — these are your risk areas.
Issue a written policy. It doesn't need to be 20 pages. It needs to clearly state which tools are approved for which types of work. "Don't use personal AI accounts for anything involving legal or compliance matters" is a start.
Brief your workforce. The defendant in Heppner almost certainly did not understand the privilege implications when he typed his legal strategy into a free AI tool. Your employees don't either.
The Heppner ruling did not create new law. It confirmed that the old law applies to new technology in exactly the way you would expect. The companies that treat this as a fire drill will spend months scrambling. The companies that treat it as an operational signal will spend a week writing a policy.
---
Don Ho, Esq. is Co-Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses. When he's not navigating the intricate web of AI business policies and regulations, he's probably on dad duty or enjoying a cuppa Taiwanese oolong.