
78 State Chatbot Bills. 58 Lawsuits. And a Federal Deadline Eight Days Away.

By Don Ho, Esq. | March 3, 2026

On March 11, 2026, two federal deadlines converge. The Secretary of Commerce must identify which state AI laws the Trump administration considers "burdensome" to national AI leadership. The FTC must issue a policy statement on when state AI laws may be preempted by federal action. Those two documents, dropping eight days from now, could reshape the compliance landscape that in-house counsel have been building toward for two years.

Meanwhile, the current state of play is this: 78 chatbot-related bills across 27 states in the first weeks of 2026 alone. An analysis of 284 deployer-facing AI lawsuits shows chatbot wiretap claims grew from 2 matters in 2021 to 30 in 2025. California's SB 243 has been in force since January 1, 2026. Tennessee just passed a standalone law making it a Class A felony (15 to 60 years) to knowingly train AI to encourage suicide. Washington's SB 5984 passed the Senate with treble damages up to $25,000 per chatbot disclosure violation.

If your company runs a customer-facing AI chatbot, the exposure is real and it is accelerating.

Three Regulatory Models, One Product

State chatbot legislation is not uniform, and the differences matter operationally. Three distinct frameworks have emerged.

Disclosure-first. California's SB 243 requires operators to tell users they are interacting with AI, to give minor users periodic break reminders, and to implement "reasonable measures" to prevent harmful content. The private right of action is $1,000 per violation. Washington's SB 5984 follows a similar framework but with hourly disclosure intervals for minors and treble damages up to $25,000.

Use-restriction. New York's S9051 goes further. For minor users specifically, it prohibits chatbots from using personal pronouns, expressing personal opinions, simulating emotional relationships, or prioritizing flattery over safety. The compliance challenge here is not procedural. It requires modifying what the system outputs for a user segment that is often difficult to reliably identify.

Criminal prohibition. Tennessee's SB 1493 creates felony liability for knowingly training AI to encourage suicide or simulate human emotional relationships. SB 1580, which the Tennessee Senate passed unanimously, prohibits AI systems from representing themselves as qualified mental health professionals. Tennessee is now the first state with a standalone AI mental health prohibition.

Oregon's SB 1546, which advanced to the Senate floor on February 12, goes further than anything currently on the books. The specifics are still developing, but the trajectory is clear: states are moving from soft disclosure requirements to hard behavioral prohibitions with criminal exposure.

The Litigation Wave Is Already Here

The legislative activity gets the headlines. The lawsuits are the actual threat.

Chatbot wiretap claims are filed under the Electronic Communications Privacy Act and state wiretap statutes. The theory: when a company's chatbot transmits conversation data to a third-party AI vendor (the underlying model provider), that transmission may constitute interception without user consent. Plaintiffs' lawyers do not need to prove harm. They need to prove interception.

From 2 matters in 2021 to 30 in 2025 is a 1,400% increase in four years. That is not a trend that reverses itself.

The lawsuits are not all coming from fringe plaintiffs' firms. Consumer protection practices at major litigation shops have built teams around AI chatbot claims. The case law is developing fast, and the early defendants are companies that never thought their customer service chatbot was a litigation risk.

The common fact pattern: a company deploys a chatbot powered by an external LLM (often from OpenAI, Anthropic, or a vertical-specific provider). Conversation data is transmitted to that vendor for inference. The terms of service do not clearly disclose this. A wiretap claim follows.
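
To see where the exposure sits in that pattern, here is a minimal Python sketch of the vendor hand-off. The endpoint, field names, and function are placeholders for illustration, not any real provider's API.

    import json
    import urllib.request

    def send_to_vendor(conversation: list[dict]) -> str:
        """Forward a chat transcript to an external LLM vendor for inference.

        This outbound call is the step plaintiffs characterize as an
        "interception": the user's conversation content leaves the
        operator's systems for a third party.
        """
        payload = json.dumps({"messages": conversation}).encode()
        request = urllib.request.Request(
            "https://api.example-llm-vendor.com/v1/chat",  # hypothetical URL
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)["reply"]

If that outbound call is not clearly and conspicuously disclosed to users, it is the factual core of the wiretap theory.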

What March 11 Changes (Or Doesn't)

The Trump administration's executive order on AI (January 2025) directed the Secretary of Commerce to identify state AI laws that burden national AI development. It also directed the FTC to clarify when federal standards preempt state regulation. Those deliverables are due March 11.

The administration has been broadly skeptical of state AI regulation. The Commerce report will almost certainly flag aggressive state laws like Oregon's SB 1546 and New York's S9051 as targets for federal preemption arguments.

But here is the complication: the executive order explicitly carved out "child safety protections" from federal preemption. Almost every chatbot bill in active legislation is framed around child safety. California's SB 243 is a chatbot safety bill with youth-specific protections. Washington's SB 5984 is built around minors. Tennessee's laws focus on AI interactions with vulnerable users. The carveout for child safety creates a category of state law that may survive federal preemption pressure entirely.

What this means practically: do not assume March 11 produces a clean federal preemption of state chatbot laws. Even if Commerce flags certain laws, preemption requires either express statutory language or an irreconcilable conflict with federal law. Neither exists right now. Enforcement of state laws can continue while federal preemption gets litigated for years.

The Compliance Problem Is Technical, Not Just Legal

Most companies approaching AI chatbot compliance treat it as a terms-of-service and disclosure problem. Add a banner. Update the privacy policy. Done.

That approach covers California's SB 243 imperfectly and leaves New York and Tennessee exposure entirely unaddressed.

New York's use restrictions require the system to behave differently for minor users. That means age verification (or a conservative assumption that all users could be minors in certain contexts) and a modified system prompt or output filter that changes what the chatbot says. That is an engineering requirement, not just a legal one.
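
A minimal sketch of what that can look like in code, assuming a system-prompt approach. The prompt text loosely paraphrases S9051's restrictions, and the names are illustrative; this is a starting point, not a compliance template.

    # Illustrative sketch: gate chatbot behavior on whether the user is a
    # verified adult, failing closed when age status is unknown.

    DEFAULT_PROMPT = "You are a helpful customer service assistant."

    # Restrictions modeled loosely on New York S9051's minor-user rules.
    MINOR_SAFE_PROMPT = DEFAULT_PROMPT + (
        " Do not refer to yourself with personal pronouns, do not express"
        " personal opinions, do not simulate an emotional relationship with"
        " the user, and never prioritize flattery over safety."
    )

    def select_system_prompt(verified_adult: bool | None) -> str:
        """Treat unverified users as possible minors (the conservative
        assumption described above)."""
        if verified_adult is True:
            return DEFAULT_PROMPT
        return MINOR_SAFE_PROMPT

The design choice that matters is the default: when age status is unknown, the system falls back to the restricted behavior rather than the permissive one.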

Tennessee's felony standard for knowingly training AI to encourage suicide requires companies to think about their fine-tuning and RLHF processes, not just their deployed product. If your team contributed to training data or custom fine-tuning of a model that surfaces harmful content to vulnerable users, the word "knowingly" is doing significant legal work.

Oregon's SB 1546 is still advancing, and as noted above, it would go further than anything currently on the books. GCs whose companies operate at scale in Oregon need to be tracking it specifically.

What to Do Now

Map your chatbot data flows. For every customer-facing AI chatbot, document where conversation data goes. If it transmits to an external LLM vendor, that is your wiretap exposure. Review whether your disclosure language covers that transmission clearly and conspicuously.
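
One way to structure that documentation is a per-flow inventory. This Python sketch is a hypothetical starting point; the fields are assumptions about what counsel will want to see, not a statutory checklist.

    from dataclasses import dataclass

    @dataclass
    class ChatbotDataFlow:
        """One record per bot-to-vendor data flow (illustrative fields)."""
        chatbot: str              # which customer-facing bot
        vendor: str               # external LLM provider receiving the data
        data_sent: str            # e.g., "full conversation transcripts"
        retention: str            # vendor retention period, per contract
        used_for_training: bool   # does the vendor train on this data?
        disclosed_to_users: bool  # is the transmission clearly disclosed?

    inventory = [
        ChatbotDataFlow(
            chatbot="support-bot",        # hypothetical bot name
            vendor="example-llm-vendor",  # hypothetical vendor
            data_sent="full conversation transcripts",
            retention="30 days",
            used_for_training=False,
            disclosed_to_users=False,     # this gap is the wiretap exposure
        ),
    ]

Any record where data flows to an external vendor and disclosure is missing is a wiretap exposure candidate, and the first item to fix.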

Run a state law inventory. If your company does business in California, Washington, Tennessee, New York, or Oregon, each of those states has active chatbot legislation that requires specific compliance actions now or in the near term. Assign each law to a named owner with a compliance deadline.

Assess your minor-user exposure. If your chatbot is accessible to users under 18 in any context, California's $1,000-per-violation private right of action is already live. Washington's treble damages provision is moving. Build your minor-user detection and behavioral modification plan before those laws become enforcement reality.

Watch March 11 but don't bet on it. The Commerce report and FTC statement may shift the regulatory landscape. They may not. Plan as if state laws stand. If federal preemption materializes, you can ease off. The reverse is not true.

Audit your chatbot vendor contracts. If your LLM vendor is transmitting or retaining conversation data in ways you haven't disclosed to users, you have a wiretap exposure that starts in the contract. Get clarity on data flow, retention, and training data practices from every vendor in your stack.

The companies that are going to get hit hardest in 2026 are the ones treating chatbot compliance as a one-time disclosure exercise. It is a live, multi-state, multi-theory liability problem that is getting bigger every week.

Don Ho, Esq. is Co-Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses. When he's not navigating the intricate web of AI business policies and regulations, he's probably on dad duty or drinking a cuppa Taiwanese oolong.