2026-02-20 · Don Ho · 1380 words

Your AI Legal Research Just Became Evidence Against You

On February 10, 2026, a federal judge in the Southern District of New York answered a question most lawyers had been quietly hoping would never come up: Is what you tell Claude protected by attorney-client privilege?

No. Not even close.

Judge Jed Rakoff's ruling in United States v. Heppner is the most consequential AI-related legal decision of the year so far, and most lawyers are still sleeping on it. The written opinion followed on February 17. It runs 20-plus pages. It is not ambiguous.

Here is what happened. Bradley Heppner, a former financial-services executive facing federal securities fraud charges, used Anthropic's Claude to run prompts about the government's investigation and his potential legal exposure. He entered facts he had learned from his attorney. Claude generated written responses. Those responses ended up on his laptop. Federal agents seized the laptop pursuant to a search warrant. Defense counsel claimed privilege.

Judge Rakoff disagreed, on every theory they tried.

Why the Privilege Failed on Every Ground

Defense counsel ran three arguments. All three lost.

The attorney-client privilege argument: Privilege protects confidential communications between a client and their attorney for the purpose of obtaining legal advice. Claude is not an attorney. Claude is not an agent of an attorney. Claude is a third-party AI platform. The moment Heppner typed into Claude's interface, he was communicating with Anthropic's system, subject to Anthropic's privacy policy. That policy does not create a confidential relationship. It does the opposite: it explicitly reserves Anthropic's right to use inputs for safety monitoring and model improvement. The court's language is worth quoting directly: "No fiduciary relationship exists, or could exist, between an AI user and a platform such as Claude." Done.

The work product doctrine argument: Work product protection covers materials prepared in anticipation of litigation at the direction of counsel. Heppner's own lawyer conceded the key fact that killed this argument: counsel "did not direct Heppner to run Claude searches." Heppner did it on his own. The court found no work product protection for unsupervised, self-initiated AI research, even when the client is actively involved in litigation and the research concerns that litigation.

The subsequent sharing theory: Heppner's team argued that because he shared the Claude outputs with his attorney afterward, privilege attached retroactively. The court rejected this outright. You cannot confer privilege on a document after the fact by routing it through your lawyer. Privilege attaches at the moment of communication, not at the moment of delivery.

The ruling covers three separate theories of protection, and three separate ways they all failed. This was not a close call.

The Anthropic Privacy Policy Did Real Work in This Opinion

This is the part most legal commentators have underplayed.

The court explicitly relied on Anthropic's privacy policy to defeat the confidentiality prong of the privilege analysis. The policy's language about potential monitoring and data use was evidence that the communications were not, in fact, kept confidential in the legal sense.

Think about what that means for your current AI stack. Every consumer AI product your team uses has a privacy policy. Most of those policies contain some version of language about safety monitoring, abuse detection, or model training. Every one of those policies is now potential ammunition in a privilege dispute if client information touches those systems.
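One way to see how much of that ammunition is sitting in your own stack is to export each vendor's policy text and scan it for the clause types the Heppner court leaned on. Below is a minimal sketch in Python, assuming the policies have already been saved as local text files; the directory layout and phrase list are illustrative, not exhaustive, and not a substitute for counsel reading the policy.

```python
import re
from pathlib import Path

# Phrases of the kind the Heppner court treated as defeating confidentiality.
# Illustrative only; a real review means counsel reading each clause in context.
RED_FLAG_PHRASES = [
    r"safety monitoring",
    r"abuse detection",
    r"model (training|improvement)",
    r"improve our (services|models)",
    r"may review (your )?(inputs|conversations|content)",
]

def scan_policy(path: Path) -> list[str]:
    """Return the red-flag phrases found in one exported policy file."""
    text = path.read_text(errors="ignore").lower()
    return [p for p in RED_FLAG_PHRASES if re.search(p, text)]

if __name__ == "__main__":
    # Assumes each vendor policy has been saved as ./policies/<vendor>.txt
    for policy in sorted(Path("policies").glob("*.txt")):
        hits = scan_policy(policy)
        if hits:
            print(f"{policy.stem}: {len(hits)} red-flag clause(s) -> {hits}")
```

A hit is not a legal conclusion. It is a flag telling counsel which clause to read before anyone decides the tool is safe for client material.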

This is not hypothetical. A federal judge just used an AI company's own terms of service to rule against a criminal defendant. Litigators on the other side now have a playbook.

What This Means If You Are Deploying AI for Legal Work

I have deployed AI systems across legal, lending, and compliance environments. The failure mode I see most often is not malice. It is well-intentioned convenience. Attorneys and paralegals use consumer AI tools because they are fast and accessible, often on personal devices, often without any organizational policy governing the practice.

The Heppner ruling creates three specific exposure categories worth auditing against right now:

1. Consumer AI for privileged communications. Any workflow where attorneys or clients use ChatGPT, Claude, Gemini, or similar consumer-facing tools to analyze privileged facts, litigation strategy, or client exposures is now on notice. The convenience argument does not survive a motion to compel.

2. Client self-help AI. Clients who use AI to research their own legal situations before or during representation, especially if they input facts learned from counsel, are generating documents that may be discoverable. Most engagement letters do not address this. They should.

3. The retroactive privilege theory is dead. If your current practice involves having clients or staff draft something in a consumer AI tool and then share it with counsel to "make it privileged," Heppner just closed that door explicitly.
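Those three categories are concrete enough to turn into a first-pass audit over whatever usage inventory you can assemble (a survey, expense reports, SaaS logs). Here is a minimal sketch with hypothetical record fields; it is a triage aid, not a privilege determination.

```python
from dataclasses import dataclass

# Consumer-facing tiers of the tools named above; illustrative, not exhaustive.
CONSUMER_AI_TOOLS = {"chatgpt", "claude", "gemini"}

@dataclass
class AIUseRecord:
    tool: str                        # product name as reported or logged
    enterprise_tier: bool            # covered by a firm data-handling agreement
    privileged_content: bool         # inputs included client facts or strategy
    counsel_directed: bool           # use was directed and supervised by counsel
    shared_with_counsel_after: bool  # output routed to counsel after the fact

def heppner_flags(r: AIUseRecord) -> list[str]:
    """Map one usage record onto the three exposure categories above."""
    flags = []
    consumer = r.tool.lower() in CONSUMER_AI_TOOLS and not r.enterprise_tier
    if consumer and r.privileged_content:
        flags.append("1: consumer AI used on privileged material")
    if consumer and not r.counsel_directed:
        flags.append("2: self-initiated AI research, likely discoverable")
    if r.shared_with_counsel_after and not r.counsel_directed:
        flags.append("3: relying on after-the-fact sharing with counsel")
    return flags

# Example inventory rows, e.g. pulled from a usage survey or SaaS logs.
inventory = [
    AIUseRecord("ChatGPT", False, True, False, True),
    AIUseRecord("Claude", True, True, True, False),
]
for rec in inventory:
    for flag in heppner_flags(rec):
        print(f"{rec.tool}: {flag}")
```

Anything it flags goes to counsel. Anything it does not flag is merely unflagged, not cleared.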

The court was careful to note that it was not addressing counsel-directed use of a secure enterprise AI platform. That is the open question. The implication is that a lawyer directing a client to use a firm-managed, enterprise-grade AI tool with appropriate data handling agreements may land in a different place. But that situation requires: (a) explicit counsel direction, (b) an enterprise-grade tool, and (c) contractual data handling that does not defeat confidentiality. Most firms are not there yet.
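If you are building toward that counsel-directed enterprise setup, conditions (a) through (c) are worth enforcing as a gate before any client data is entered, not as a post-hoc checklist. A minimal sketch of that gate follows; the field names are illustrative, and the real controls live in your engagement terms and vendor contracts.

```python
from dataclasses import dataclass

@dataclass
class MatterAIRequest:
    counsel_directed: bool     # (a) counsel explicitly directed the use
    enterprise_platform: bool  # (b) firm-managed, enterprise-grade tool
    dpa_in_place: bool         # (c) data handling terms preserve confidentiality

def may_proceed(req: MatterAIRequest):
    """Return (allowed, unmet conditions) before any client data is entered."""
    unmet = []
    if not req.counsel_directed:
        unmet.append("(a) no explicit counsel direction")
    if not req.enterprise_platform:
        unmet.append("(b) not a firm-managed enterprise platform")
    if not req.dpa_in_place:
        unmet.append("(c) no data handling agreement preserving confidentiality")
    return (len(unmet) == 0, unmet)

allowed, gaps = may_proceed(MatterAIRequest(True, True, False))
print("proceed" if allowed else f"blocked: {gaps}")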

The Practical Checklist for GCs and Litigation Teams

This ruling affects anyone who uses AI in connection with legal work. Here is what to assess now:

1. Inventory where consumer AI tools touch client matters, including use on personal devices, and shut down anything that puts privileged facts or litigation strategy into a consumer-facing product.

2. Update engagement letters to address client self-help AI research, so clients understand that what they type into a chatbot may be discoverable.

3. Kill any workflow that routes AI-generated material through counsel to "make it privileged." Heppner forecloses it.

4. If you move to an enterprise platform, document counsel direction and confirm that the contractual data handling terms actually preserve confidentiality.

The Heppner ruling is a first-of-its-kind opinion from a federal court in one of the most important jurisdictions in the country. It will be cited. It will spread. And every litigation team that has been ignoring their AI governance gap just got a very expensive reminder that the gap matters.

If you are using consumer AI for anything touching client matters, that practice ends today. If you are not sure whether you are, that uncertainty is the answer.