A California Lawyer Just Got Hit With a $25K AI Sanction. The State Bar Is Watching.
Meta description: A California federal court ordered $25,000 in AI-related fee sanctions in February 2026. State bars are initiating disciplinary proceedings. Here's what the liability landscape actually looks like for lawyers and businesses using AI in 2026.
Published: February 19, 2026
By Don Ho
---
A California federal court ordered $25,000 in fee sanctions last week over AI errors in a copyright and contract dispute.
Not a hallucinated citation case. Not a fabricated precedent situation. A sanction tied to errors in AI-assisted work product that the opposing party had to spend time redoing.
$25,000 is not a career-ending number. But the trajectory is.
---
What's Actually Happening in the Courts
The Mata v. Avianca case in 2023 was the inflection point most people remember — the lawyer who filed a brief with six fictitious AI-generated case citations, got caught, paid $5,000 in sanctions. That case put "AI hallucinations" in the headlines.
What's happened since is more systematic and, in some ways, more dangerous.
Courts aren't just sanctioning hallucinated citations anymore. They're sanctioning the broader pattern of AI-assisted work that bypasses attorney judgment. The $25K California sanction is in that category. Work product with errors. Opposing counsel spent time correcting what should have been verified before filing.
The number went up. The bar for triggering it went down.
At the same time, state bars have started moving from "educational guidance" to actual disciplinary proceedings. The shift happened quietly in late 2025, but as of early 2026, using public AI tools for client work without meaningful human verification is documented as a clear ethical violation in bar guidance across multiple states.
California's Senate passed a bill this month specifically regulating lawyers' use of AI — including an explicit bar on putting confidential client information into public generative AI tools.
The legal profession's honeymoon period with AI is over.
---
The Liability Architecture Is Still Being Built
Here's the part that makes me uncomfortable as a GC: the liability rules for AI-assisted legal work are still being written in real time, by courts that are making it up as they go.
Right now, three different liability frameworks are competing:
Framework 1: AI as tool, lawyer is fully liable. The tool doesn't matter. If you sign off on work product, you own it. This is where most courts currently sit. It's what produced the sanctions in Mata, in the California case, and in a dozen other cases you haven't read about because they weren't high-profile enough to get press coverage.
Framework 2: AI as process, proportional liability. Courts look at what verification steps the lawyer took, not just the output. If you used AI to research but verified every citation before filing, you're in a different position than if you copied and pasted the AI's output directly into the brief. A handful of courts are starting to look at the process, not just the result.
Framework 3: AI vendor liability, shared accountability. Judge Rakoff's ruling in the Southern District of New York earlier this year signaled that AI-generated documents may not be privileged when created with consumer AI tools — because the third-party vendor relationship breaks the privilege chain. That decision cuts both ways. If the privilege fails because the AI company saw your document, maybe the AI company has some accountability for the output too. Nobody has successfully litigated this theory yet, but it's coming.
The confusion between these frameworks is the actual problem. Lawyers are making decisions about AI use inside a liability environment they can't fully see.
---
What "Human-in-the-Loop" Actually Means
Every piece of AI guidance I've seen from state bars says some version of "meaningful human oversight." That phrase is doing a lot of work that nobody is unpacking.
In my experience deploying AI systems professionally, there are three versions of human-in-the-loop:
Version 1: Human on record. A lawyer signed the brief, therefore a human was "in the loop." This is the version that's getting people sanctioned. Signing something doesn't mean reviewing it. Courts are catching up to this distinction fast.
Version 2: Human as checker. A lawyer reviews AI-generated work before it goes out. This sounds like the right answer. The problem is that it assumes the lawyer can catch the AI's errors — which requires knowing what to look for. If the AI generates a plausible-sounding analysis in an area where the reviewing attorney isn't deeply expert, the "checker" may not catch what's wrong. This is the competence problem the bars haven't fully addressed yet.
Version 3: Human as architect. The lawyer designs the AI workflow, sets the parameters, defines what verification steps happen before output leaves the system, and reviews the final product with specific knowledge of where the AI is likely to fail. This is what "meaningful oversight" actually requires. Almost nobody is doing this yet.
The liability exposure gap is between Version 1 and Version 3. Most lawyers currently operating somewhere in Version 1-2 territory believe they're in Version 2-3.
---
The Numbers That Should Worry Business Leaders
This isn't only a lawyer problem. Any business using AI to produce client-facing work, regulatory filings, compliance documentation, or contract analysis has the same exposure architecture.
Consider: if your hallucination rate on production AI is 6%, and you're processing 100 client deliverables a month, you have 6 errors going out the door monthly. Some of those errors are cosmetic. Some of them are material. A small percentage of the material ones will eventually surface in a dispute.
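The arithmetic above can be sketched as a back-of-envelope exposure model. This uses the article's example numbers (6% hallucination rate, 100 deliverables a month); the split between cosmetic and material errors and the dispute rate are illustrative assumptions I'm supplying, not figures from any study.

```python
def expected_liability_events(deliverables_per_month: int,
                              hallucination_rate: float,
                              material_fraction: float,
                              dispute_rate: float) -> dict:
    """Estimate how many AI errors per month become liability events."""
    errors = deliverables_per_month * hallucination_rate
    material = errors * material_fraction
    disputes = material * dispute_rate
    return {
        "errors_per_month": errors,
        "material_errors": material,
        "expected_disputes": disputes,
    }

# The article's scenario: 100 deliverables, 6% error rate.
# Hypothetical assumptions: 25% of errors are material, and 10% of
# material errors eventually surface in a dispute.
result = expected_liability_events(100, 0.06, 0.25, 0.10)
print(result["errors_per_month"])   # 6 errors going out the door monthly
print(result["expected_disputes"])  # a fraction of a dispute per month
```

The point of running numbers like these isn't precision. It's that the expected-dispute figure never hits zero; at any realistic volume, the only lever you control is whether errors get caught before they leave the building.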
The question isn't whether you'll have an AI error. You will. The question is whether you built the system so that errors stay contained or whether you built it so that errors become liability events.
In 2024, the cost of catching an AI error before it left the building was low. In 2026, the cost of catching it after it left the building is escalating. The $25K sanction in California is one data point. The state bar disciplinary proceedings being initiated against attorneys for consumer AI use are another. The Colorado AI Act hitting enforcement this June is another.
The environment is moving. The businesses that built AI workflows on 2023 assumptions are going to find out in 2026 and 2027 that those assumptions are wrong.
---
A Practical Framework for 2026
Three things that change your liability position:
1. Write your AI use policy down. This sounds obvious. Almost no firms or businesses have done it. A written policy that specifies what AI tools are approved, what they can be used for, what the verification requirements are, and who is accountable for outputs — that policy is your first line of defense when something goes wrong. It demonstrates process. Courts and bars care about process.
2. Distinguish between AI as research and AI as output. Using AI to surface information that you then verify and synthesize is different from using AI to generate the final document that goes to a client or a court. Your policy needs to specify which uses require what level of verification. The more client-facing or legally consequential the output, the more verification it requires.
3. Never put client confidential information into a consumer AI tool. Full stop. This one isn't gray. California codified it in statute. The bar guidance in most states is explicit. The Rakoff privilege ruling makes the legal theory clear. Consumer AI tools process your input. That processing may constitute disclosure. Disclosure may waive privilege. If you're using ChatGPT, Claude on a consumer plan, or any similar tool for client work, you're taking a privilege risk that most clients haven't consented to.
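The written policy in item 1 can even be captured as structured data, so it lives in version control and can be checked by tooling rather than sitting in a memo nobody reads. A minimal sketch; the tool names, use categories, and verification tiers below are hypothetical placeholders, not recommendations.

```python
# An AI-use policy expressed as data. Every tool name, category, and
# verification tier here is an illustrative example, not a standard.
AI_USE_POLICY = {
    "approved_tools": {
        # Only enterprise tools covered by a data-processing agreement.
        "enterprise_research_tool": {"confidential_input_allowed": False},
    },
    "use_categories": {
        # Item 2's distinction: research vs. output, with escalating
        # verification as work becomes more client-facing.
        "research":        {"verification": "verify_every_cited_authority"},
        "internal_draft":  {"verification": "attorney_review_before_reuse"},
        "client_facing":   {"verification": "line_by_line_attorney_review"},
        "court_filing":    {"verification": "line_by_line_attorney_review"},
    },
    "prohibited": [
        # Item 3: never in a consumer tool, full stop.
        "client_confidential_data_in_consumer_tools",
    ],
    "accountable_role": "signing_attorney",
}

def required_verification(category: str) -> str:
    """Look up the verification step a given output category requires."""
    return AI_USE_POLICY["use_categories"][category]["verification"]

print(required_verification("court_filing"))
```

A policy in this form does the thing courts and bars actually credit: it demonstrates process, because anyone can point to the exact verification step that applied to the work product in question.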
---
What Comes Next
Sanctions are getting larger. State bars are moving from guidance to enforcement. California, the state most likely to establish precedent, just passed legislation with explicit AI rules for attorneys.
The window where "we're still figuring this out" works as a defense is closing. Courts sanctioning attorneys for AI errors aren't treating it as an emerging area that requires special patience anymore. They're treating it the way they treat any other professional competence question: you're expected to know what you're doing with the tools you use.
If you're an attorney still using AI without a documented verification process, you're not ahead of the curve. You're behind it.
The $25K case won't be the last one you read about this month.
---
Don Ho is a 19-year attorney and AI systems operator. He serves as General Counsel at Stratus Financial and runs Kaizen AI Lab. He advises businesses on building AI systems that don't create legal exposure.