2026-02-21 · Don Ho · 973 words

The White House Just Told a State to Kill Its AI Child Safety Bill

The Trump administration sent a letter to a Utah Republican lawmaker last week telling him to kill his own AI transparency bill. The bill was bipartisan. The bill required AI companies to publish safety plans and protect children. The White House called it "an unfixable bill that goes against the Administration's AI Agenda."

Read that again. A federal administration told a state legislator from its own party to kill a child safety bill. Because it was too much AI regulation.

This is not a political story. This is a compliance story. And if you run a business, it affects you directly.

What the Utah Bill Actually Did

HB 286, the Artificial Intelligence Transparency Act, required large AI developers to publish safety plans and implement protections for children.

The White House sent a memo to Utah Senate Majority Leader Kirk Cullimore Jr. on February 12, 2026, calling the bill "categorically opposed" to the administration's AI agenda. Days later, Utah's own governor, Spencer Cox, broke with the White House and said states should lead on AI policy.

This created the first known direct federal-state conflict over AI legislation.

The Federal Preemption Play

Trump signed an executive order in late 2025 directing the Department of Justice to create an AI Litigation Task Force. Its job: identify state AI laws that conflict with the federal approach and prepare to challenge them.

The administration's stated rationale is that a "patchwork of 50 different regulatory regimes" would make AI compliance too complex and stifle innovation. So the answer is no regime. Not one coherent federal framework — that does not exist yet. Just no state frameworks either.

For businesses, this creates a specific problem. The administration's position is that states cannot regulate AI, but Congress has not passed any federal AI law. The executive order is not law. The AI Litigation Task Force has not won any cases yet. So the legal landscape right now is this: states are passing AI laws, the federal government insists those laws are invalid, and no federal statute exists to replace them.

You are operating in that gap.

Why This Matters More Than the Headlines Suggest

The Utah situation is the canary. Here is what it reveals about where federal AI policy is actually going.

The administration carved out child safety from federal preemption in the executive order. But when Utah passed a child safety AI bill, the White House still called it unfixable. The exception is apparently not the exception.

This tells you the administration's real position: no state AI regulation, regardless of what it covers or how carefully it is written. The carve-outs are window dressing.

For GCs and compliance teams, this creates a two-track problem. You cannot rely on state laws to set your compliance floor, because the federal government may challenge them. You also cannot rely on federal law to set your compliance floor, because no federal AI law exists.

The only rational response is to set your own standards above whatever floor exists, because the floor keeps shifting.

The Accountability Gap Nobody Is Naming

The deeper issue here is liability asymmetry.

AI companies benefit from the absence of federal and state regulation. If there are no mandatory safety standards, you cannot violate them. If there is no required safety plan, you cannot be found negligent for not having one.

But if your AI system harms someone, you can still be sued. The absence of regulation does not eliminate tort liability. Courts can still find negligence, product defect, or unfair business practices. The FTC can still investigate.

So the situation for AI companies is: no mandatory compliance requirements, but full exposure to civil liability and enforcement actions when things go wrong.

That is a trap disguised as freedom.

What to Do Right Now

Waiting for the regulatory environment to stabilize is a strategy. A bad one, but a strategy. Here is what actually works:

Document your AI safety standards now. Write down what you test, how you test it, and what thresholds trigger a review before deployment. Even if no law requires this today, the absence of documentation becomes a problem in litigation tomorrow.
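To make that concrete, here is a minimal sketch of what a machine-readable safety record could look like. Every name, metric, and threshold below is hypothetical, invented for illustration; no statute or standard prescribes them. The point is structural: each test, its measured result, and the threshold that pauses deployment are written down in one dated artifact.

```python
from dataclasses import dataclass

@dataclass
class SafetyCheck:
    """One documented pre-deployment test: what is measured,
    the result for this release, and the threshold that forces review."""
    name: str          # what is tested
    metric: float      # measured result for this release
    threshold: float   # value at or above which deployment pauses for review

def needs_review(checks: list[SafetyCheck]) -> list[str]:
    """Return the names of checks whose measured result crosses the review threshold."""
    return [c.name for c in checks if c.metric >= c.threshold]

# Hypothetical release record -- the specific tests and numbers are
# made up; what matters is that they are recorded before deployment.
release_checks = [
    SafetyCheck("harmful-content rate (red-team prompts)", metric=0.008, threshold=0.01),
    SafetyCheck("minor-safety filter bypass rate", metric=0.03, threshold=0.02),
]

flagged = needs_review(release_checks)
print(flagged)  # the second check crosses its threshold and triggers review
```

Even a record this simple answers the questions litigation asks later: what did you test, what did you find, and what did you do about it.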

Treat child safety as non-negotiable. The federal-state fight will eventually resolve. Child safety AI requirements will survive it in some form. Companies that already have policies in place will have a head start. Companies that treated the regulatory chaos as permission to do nothing will not.

Assume state laws will survive long enough to matter. The executive order has not been tested in court. The AI Litigation Task Force has not filed any challenges. States are moving forward. If you do business in California, Colorado, or Utah, treat their AI laws as likely to be enforceable for the foreseeable future.

Watch the California AG. California is not intimidated by the executive order. The AG has already sent a cease-and-desist to xAI, built a dedicated AI oversight unit, and has a track record of winning enforcement fights that the federal government tried to block. California will be the enforcement venue that matters most.

The regulatory patchwork the White House is trying to prevent is already here. The White House cannot preempt it by sending memos to state senators. It needs an actual law, and that law does not exist.

Until it does, your job is to operate as if every state has the right to regulate you. Because a federal court may eventually agree.