2026-02-18 · Don Ho · 1233 words

California Just Built an AI Enforcement Unit. If You Deploy AI, Read This.

While Congress argues about whether to regulate AI, California's Attorney General is building a team to investigate companies that already broke the rules. The xAI probe is just the beginning.

What Happened

California Attorney General Rob Bonta announced on February 17, 2026, that his office is creating a dedicated "AI oversight, accountability and regulation program." This isn't a study group or an advisory committee. It's an enforcement unit with investigators, legal staff, and subpoena power.

The announcement came during an active investigation into Elon Musk's xAI over Grok's generation of non-consensual sexually explicit images. Bonta's office had already sent a cease-and-desist letter last month. According to Reuters, Bonta said xAI "deflected responsibility" and still permits some sexualized content generation for paying subscribers.

His response: "Just because you stop going forward doesn't mean you get a pass on what you did."

Connecticut Attorney General William Tong joined the announcement, calling AI and social media harm "the consumer protection fight of our time" and comparing it to the opioid crisis in scale and urgency.

Why This Changes the Regulatory Landscape

For the past three years, AI companies have operated in a regulatory gray zone. Federal legislation has stalled repeatedly. The EU AI Act exists but enforcement timelines stretch into 2027 and beyond. State-level bills number in the hundreds, but most are still working through committees.

California just changed the equation. Instead of waiting for new legislation, Bonta is using existing consumer protection authority to investigate AI companies now. You don't need a new AI law to prosecute consumer harm. You need a team that understands how to identify and prove AI-specific harms under existing statutes.

This matters for three reasons:

First, California sets the template. When California moves on consumer protection, other states follow. The state's privacy laws became the de facto national standard because companies found it easier to comply everywhere than to maintain separate systems. The same dynamic applies to AI enforcement. Build your AI compliance for California standards, or build it twice.

Second, existing law is broader than you think. Companies focused on tracking new AI-specific legislation are missing the point. Unfair business practices, consumer protection, data privacy, employment law, and civil rights statutes all apply to AI deployments right now. California isn't creating new legal theories. It's applying old ones to new technology. That's faster and harder to challenge.

Third, the move into the federal vacuum is deliberate. Bonta explicitly warned against granting Congress exclusive regulatory authority, citing its gridlock on data protection. California's position is clear: waiting for federal action isn't caution, it's abdication. States will fill the gap whether industry likes it or not.

The xAI Case as a Preview

The xAI investigation isn't just about explicit images. It's a test case for how AI enforcement works in practice. Watch the pattern:

The AG's office identified specific harm (non-consensual explicit images). They used existing authority to demand action (cease-and-desist). When the company's response was inadequate ("deflected responsibility," still permitting content for subscribers), they escalated to a formal investigation. Simultaneously, they're building the institutional capacity (the new AI oversight program) to handle more cases at scale.

This is a playbook. And it works against any AI deployment that creates consumer harm, not just explicit content.

If your AI system generates outputs that affect consumers, employees, or the public, and those outputs cause harm, California's AG now has a dedicated team to investigate. They don't need to prove your AI is "dangerous" in some abstract sense. They need to prove it caused specific harm to specific people under existing consumer protection law.

What Companies Should Do Now

The window between "AI is unregulated" and "the AG is investigating your AI deployment" just closed in California. Here's what that means for your compliance posture:

Audit your AI outputs for consumer impact. If your AI makes decisions about pricing, lending, hiring, insurance, housing, or any consumer-facing process, document how those decisions are made, what guardrails exist, and how you handle errors. If you can't answer those questions today, you have a compliance gap that California (and soon other states) can investigate.

This isn't hypothetical. I work with lending companies that use AI for loan processing. Every output that touches a consumer decision needs a documented review chain, an error-correction protocol, and a human in the loop for edge cases. The companies doing this well can demonstrate to any regulator exactly how a decision was made and who verified it. The companies doing it poorly can't explain their own process, and that gap is what gets you investigated.
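For concreteness, here is a minimal sketch of what a per-decision record might capture, in Python. The DecisionRecord structure and every field name in it are my own illustration of the ideas above (review chain, guardrails, human verification), not a format prescribed by any regulator or used by any specific client.

    from __future__ import annotations
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One AI-assisted consumer decision, logged so it can be reconstructed later."""
        decision_id: str                    # stable ID you can cite when a regulator asks
        model_version: str                  # which model and configuration produced the output
        inputs_summary: str                 # what consumer data the model saw, or a pointer to it
        output: str                         # the decision or recommendation the system produced
        guardrails_applied: list[str] = field(default_factory=list)  # automated checks that ran
        human_reviewer: str | None = None   # who verified the output, if anyone
        escalated: bool = False             # routed to a person as an edge case?
        created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    # Hypothetical loan-processing example: an AI recommendation a human verified before it shipped.
    record = DecisionRecord(
        decision_id="loan-00412",
        model_version="underwriting-assist-3.2",
        inputs_summary="application 00412: stated income, credit report, payment history",
        output="recommend approval at tier B pricing",
        guardrails_applied=["fair-lending screen", "rate-cap check"],
        human_reviewer="analyst-142",
    )

If you can produce a record like this for any decision a regulator asks about, you can explain your own process. That is the whole point.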

Build an incident response plan for AI failures. Most companies have a cybersecurity incident response plan. Almost none have an equivalent for AI failures. If your AI system makes a wrong decision that harms a customer, how fast can you detect it? How do you notify affected parties? Who owns the remediation? These questions matter because enforcement actions often hinge not just on what went wrong, but on how the company responded after discovering the problem. Bonta's criticism of xAI centered partly on their deflection of responsibility. A company that discovers a problem, owns it, and fixes it fast looks fundamentally different to an AG than one that deflects and minimizes.
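As a thought experiment, here is what the bones of an AI incident record and a triage rule might look like, again in Python. The AIIncident fields and the severity thresholds are assumptions for illustration; the substance is that detection time, notification, and a single named owner get captured, because those are the questions an AG will ask.

    from dataclasses import dataclass

    @dataclass
    class AIIncident:
        """One AI failure, tracked from detection through remediation."""
        system: str                # which AI deployment produced the bad output
        description: str           # what went wrong, in plain language
        consumers_affected: int    # how many people the bad output actually reached
        detected_via: str          # "automated monitor", "customer complaint", "internal audit"
        hours_to_detection: float  # gap between the failure and anyone noticing it
        owner: str                 # the single person accountable for remediation
        affected_notified: bool = False
        remediated: bool = False

    def triage(incident: AIIncident) -> str:
        """Illustrative severity rules -- the thresholds here are placeholders, not guidance."""
        if incident.consumers_affected == 0:
            return "low: fix forward, log it for the next governance review"
        if incident.consumers_affected < 50:
            return "medium: correct the decisions, notify affected customers, document the timeline"
        return "high: pause the system, notify customers and counsel, brief leadership"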

Document your AI governance. "We use AI" is not a governance framework. You need written policies on what your AI systems do, who oversees them, how errors are reported and corrected, and what data they access. The FTI/Relativity General Counsel Report released today shows 77% of GCs say they're proactive about information governance. If you're in the other 23%, you're behind.
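A governance document can start as something as simple as a structured register with one entry per system. The sketch below is a hypothetical register entry; the keys are my own shorthand for the questions in the paragraph above (what the system does, who oversees it, how errors are handled, what data it touches), not a standard schema.

    # Hypothetical entry in an AI system register -- one of these per deployment.
    ai_system_register_entry = {
        "name": "support-ticket-summarizer",
        "purpose": "summarize inbound customer complaints for agents",
        "owner": "VP, Customer Operations",          # named human accountable for the system
        "consumer_facing": False,                    # does output reach consumers directly?
        "data_accessed": ["ticket text", "account tier"],
        "data_excluded": ["payment details", "government IDs"],
        "error_reporting_path": "agents flag bad summaries in the ticketing tool; weekly review",
        "last_reviewed": "2026-02-01",
        "written_policy": "docs/ai/summarizer-policy.md",  # where the actual policy lives
    }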

Separate "enterprise AI" from "consumer AI" in your legal analysis. The Heppner v. US ruling from two weeks ago demonstrated that documents created with consumer AI tools aren't protected by attorney-client privilege because using consumer AI constitutes third-party disclosure. California's enforcement approach will likely create similar distinctions. Enterprise deployments with proper data controls face different regulatory exposure than consumer-tool integrations.

Watch the state you're in. Oklahoma advanced a bill yesterday criminalizing AI-generated content using someone's likeness without consent, with felony charges for harm exceeding $25,000. India's Supreme Court flagged AI-drafted legal petitions citing fictitious cases as "absolutely uncalled for." The global enforcement trend is accelerating, not slowing.

The Bigger Picture

The companies that will survive this enforcement era are the ones treating AI compliance as an operational requirement, not an afterthought. The AI Compliance Stack I wrote about last week covers the five layers every company needs: Inventory, Classification, Guardrails, Documentation, and Testing. Most companies stop at Inventory and call it done.

California just made layers two through five non-optional.
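If it helps to make "layers two through five" concrete: a compliance-stack check can be as blunt as asking, per system, which layers have evidence behind them. The layer names below come from the stack described above; everything else in this sketch is hypothetical.

    LAYERS = ["inventory", "classification", "guardrails", "documentation", "testing"]

    def stack_gaps(system_status: dict[str, bool]) -> list[str]:
        """Return the compliance-stack layers a system has no evidence for."""
        return [layer for layer in LAYERS if not system_status.get(layer, False)]

    # A company that "stopped at Inventory and called it done":
    print(stack_gaps({"inventory": True}))
    # -> ['classification', 'guardrails', 'documentation', 'testing']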

The FTI report's most telling statistic: 87% of General Counsel see accelerating risk and demand, but only 39% are using AI as part of their strategy to manage it. That gap between awareness and action is where enforcement thrives.

You have maybe 12 months before dedicated AI enforcement programs exist in 10 or more states. Use the time to build the compliance infrastructure you should have started building two years ago.

The AG isn't coming for companies that deploy AI responsibly. He's coming for companies that deployed AI without thinking about what happens when it goes wrong. Those are two very different categories, and right now is the last comfortable moment to figure out which one you're in.