
A Lawyer in Oregon Just Got Hit With $109,700 in AI Sanctions. The Crisis Is Accelerating.

By Don Ho, Esq. | April 3, 2026

A federal court in Oregon last month ordered a lawyer to pay $109,700 in sanctions and costs for submitting court filings riddled with AI-generated errors. That may be the largest AI-related sanction against a single attorney in U.S. history. And it won't hold the record for long.

Damien Charlotin, a researcher at HEC Paris who maintains a worldwide tracker of courts sanctioning people for AI-generated errors, now counts more than 1,200 cases globally. About 800 of those are from U.S. courts. The rate is still increasing. On one recent day, he logged 10 new cases from 10 different courts.

This is not an isolated problem. This is a systemic failure in how the legal profession is adopting AI tools.

The Numbers Tell the Story

The MyPillow case last year got the most attention. Mike Lindell's lawyers were fined $3,000 each for filing briefs containing fictitious, AI-generated citations. That seemed like a big deal at the time. The Oregon sanction is more than 36 times that amount.

In Nebraska, the state supreme court grilled Omaha attorney Greg Lake in February over a brief filled with citations to cases that don't exist. Lake told the justices he'd accidentally uploaded a working draft from a malfunctioning computer and denied using AI. The court wasn't convinced and referred him for discipline.

In March, a similar scene played out in the Georgia Supreme Court. Same pattern: AI-generated citations, attorney denial, court skepticism, discipline referral.

The Washington Post reported this week that more than 60% of surveyed judges have used AI in their work, including drafting rulings and preparing for hearings. The same tools that are getting lawyers sanctioned are being adopted by the people who sanction them. That's not hypocrisy. It's the reality of a technology embedding itself faster than the rules can adapt.

Why Lawyers Keep Getting Caught

The explanation is simple and uncomfortable: AI is very good at producing text that looks correct. Carla Wale, associate dean at the University of Washington School of Law, put it directly: "We have this issue because AI is just too good, but not perfect."

ChatGPT, Claude, and similar tools produce fluent, confident, professional-sounding legal analysis. The case citations look real. The reasoning reads like a well-drafted brief. The problem is that the citations may not exist. The cases may be fabricated composites of real decisions. And the legal reasoning may sound persuasive while being completely wrong.

The ethical obligation hasn't changed. Wale is emphatic on this point: "Whatever the generative AI tool gives you, you, under the rules of professional conduct, you have to read those cases. You have to read the cases to make sure what you are citing is accurate."

That's the baseline. And in 800-plus U.S. cases, attorneys have failed to meet it.

The Labeling Debate Is Already Obsolete

Some courts have adopted AI labeling rules, requiring lawyers to disclose when AI was used in the preparation of court filings. Wisconsin requires lawyers to label anything produced with AI, including details about how it was used. The goal is to create a signal for judges about which filings need extra scrutiny.

Joe Patrice, senior editor at Above the Law, thinks those rules are already dead on arrival. His reasoning: AI is becoming integrated into virtually every piece of legal software. Microsoft Copilot is embedded in Word. Legal research platforms run on large language models. Document management systems use AI for search and organization.

"It's going to become so integrated into how everything operates that to be diligently complying with the rule, you would have to put on everything you put out, 'Hey, this is AI assisted,'" Patrice said. "At which point it kind of becomes a useless endeavor."

He's right. When every document is AI-assisted by default, labeling requirements don't create useful information. They create paperwork.

The Real Risk: Agentic AI and the Vanishing Middle

The current problem (lawyers submitting unchecked AI output) is bad. The next problem is worse.

AI companies are marketing "agentic" legal products that handle entire workflows end to end. Research, draft, cite, format, file. The human lawyer reviews the final output without seeing the intermediate steps.

Patrice identified the core danger: "I think once you obscure those middle steps, that's where mistakes happen. And even people who are well-meaning and not lazy will lose things because they weren't involved in that process."

This isn't about laziness. It's about the architecture of the tools. When an AI system handles research, analysis, and drafting as a single pipeline, the lawyer reviewing the output has no way to evaluate the quality of each step independently. They're checking a finished product, not auditing a process. That's how errors survive review.
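
To make the architecture point concrete, here is a minimal Python sketch of the two designs. Everything in it is hypothetical (the step functions stand in for model calls; no real vendor's API looks like this), but it shows why an end-to-end pipeline leaves the reviewer nothing to audit, while a step-wise one preserves the trail:

```python
# Sketch of the "vanishing middle." Every name here is a hypothetical
# illustration, not any real vendor's product or API.
from dataclasses import dataclass, field


@dataclass
class Step:
    name: str    # "research", "analysis", "draft"
    output: str  # what the model produced at this step


@dataclass
class Brief:
    text: str
    trail: list[Step] = field(default_factory=list)


def research(question: str) -> str:
    return f"[cases the model found for: {question}]"  # stand-in for an LLM call


def analyze(cases: str) -> str:
    return f"[the model's analysis of: {cases}]"       # stand-in for an LLM call


def draft(analysis: str) -> str:
    return f"[a brief drafted from: {analysis}]"       # stand-in for an LLM call


def opaque_pipeline(question: str) -> Brief:
    # End-to-end "agentic" flow: the reviewer receives only finished text.
    return Brief(text=draft(analyze(research(question))))


def auditable_pipeline(question: str) -> Brief:
    # Same steps, but each intermediate output is preserved, so the
    # research and analysis can be verified before the draft is trusted.
    cases = research(question)
    analysis = analyze(cases)
    text = draft(analysis)
    return Brief(text, trail=[Step("research", cases),
                              Step("analysis", analysis),
                              Step("draft", text)])
```

Both pipelines produce the same draft. The difference is whether anything other than the finished product exists to be checked.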

The Business Model Problem

AI is also threatening the economics of legal practice. If AI can draft a brief in 20 minutes that used to take a junior associate 8 hours, the billable hours disappear. Patrice framed the choice bluntly: "There are two options. The lawyers can agree to take less (pause for laughter) or they can start finding a new way to bill."

The likely shift is toward outcome-based or flat-fee billing. But that creates its own pressure. If a lawyer is billing a flat fee for a motion, the economic incentive is to minimize time spent. And the fastest way to minimize time is to accept what the AI produces with minimal review.

That's the trap. AI creates time pressure. Time pressure reduces verification. Reduced verification produces sanctions. The cycle is already running.

OpenAI Is Now a Defendant

In March, Nippon Life Insurance Company of America sued OpenAI in federal court in Illinois. The insurance company claims it was the target of frivolous legal actions by a woman who was getting bad legal advice from ChatGPT. Among the claims: OpenAI was practicing law without a license.

OpenAI told NPR the complaint "lacks any merit whatsoever." Whether the case survives a motion to dismiss is an open question. But the theory (that AI companies bear some responsibility when their products are used for legal decision-making by non-lawyers) is going to be tested repeatedly over the next two years.

Right now, the liability model is simple: the lawyer who signs the filing takes the hit. The AI company that produced the hallucination faces nothing. The Oregon lawyer pays $109,700. OpenAI, Anthropic, and Google pay zero. That asymmetry won't last forever, but today, every lawyer using AI tools is the one holding the bag.

What to Do Now

If you're a practicing attorney: Every piece of AI-generated output gets verified against primary sources. Every case citation gets checked in Westlaw or Lexis. No exceptions; a sketch of what an automated pre-filing gate could look like follows these recommendations. The cost of verification is a fraction of a $109,700 sanction.

If you manage a law firm: Build an AI use policy now. Require documentation of AI tool usage, mandate independent citation verification, and establish review protocols for AI-assisted work product. Make the policy enforceable, with consequences.

If you're a GC hiring outside counsel: Ask your firms about their AI policies. If they don't have one, that tells you something about their risk management. Add AI disclosure and verification requirements to your outside counsel guidelines.

If you're a law school: Wale at UW is designing AI ethics training. Every law school should be doing the same. The students graduating this spring are entering a profession where AI proficiency and AI skepticism are both required. Teach both.
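
As promised above, here is a minimal sketch of a pre-filing citation gate, in Python. The regex is a rough illustration (real citation formats need a real parser), and the confirmed set stands in for the step no software can replace: a lawyer actually reading each case in Westlaw, Lexis, or the reporter itself.

```python
import re

# Rough pattern for reporter citations like "410 U.S. 113" or
# "997 F.3d 1010". Illustrative only; it will miss many real formats.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:[A-Z][A-Za-z0-9.]*\s?)+\d{1,4}\b")


def extract_citations(filing_text: str) -> list[str]:
    """Pull every string that looks like a case citation out of a draft."""
    return [m.group(0) for m in CITATION_RE.finditer(filing_text)]


def prefiling_check(filing_text: str, read_and_confirmed: set[str]) -> list[str]:
    """Return citations no lawyer has confirmed reading in a primary
    source. A non-empty result means the filing is not ready."""
    return [c for c in extract_citations(filing_text)
            if c not in read_and_confirmed]


draft = "Under 410 U.S. 113 and 997 F.3d 1010, the motion fails."
print(prefiling_check(draft, read_and_confirmed={"410 U.S. 113"}))
# -> ['997 F.3d 1010']  (unverified citation: do not file)
```

A gate like this doesn't verify anything itself. It just makes it impossible to file without a human having signed off on every citation, which is the obligation Wale describes.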

The 1,200 cases on Charlotin's tracker are the beginning, not the peak. AI tools are getting more capable, more integrated, and more tempting to trust without verification. The sanctions will keep climbing. The only question is whether the profession adapts faster than the AI improves.

Don Ho, Esq. is Co-Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses. When he's not navigating the intricate web of AI business policies and regulations, he's probably on dad duty or drinking a cuppa Taiwanese oolong.