The Supreme Court Just Buried the Last Hope for AI-Only Copyright
By Don Ho, Esq. | March 3, 2026
On March 2, 2026, the U.S. Supreme Court declined to hear Thaler v. Perlmutter (docket 25-1317), the case that asked whether a work created entirely by an AI system can receive federal copyright protection. (Thaler v. Vidal, a name often confused with this one, is Thaler's separate patent fight over AI inventorship.) The Court's refusal to grant certiorari leaves in place the D.C. Circuit's March 2025 ruling: no human creativity, no copyright. Full stop.
Stephen Thaler, a computer scientist from St. Charles, Missouri, has been fighting this battle since 2018. His AI system, the "Creativity Machine," generated a visual artwork titled "A Recent Entrance to Paradise" without any human creative input (at least on Thaler's own framing). He applied for copyright registration, the Copyright Office rejected it, a federal district court upheld the rejection in 2023, the D.C. Circuit affirmed in 2025, and now the Supreme Court has refused to intervene. The Trump administration urged the Court to pass on the case. It did.
This is the end of the road for purely autonomous AI authorship under current U.S. copyright law.
What "Declined" Actually Means
A cert denial is not a ruling on the merits. The Court did not say Thaler was wrong. But that distinction is largely academic at this point.
The D.C. Circuit's opinion is now the controlling federal appellate authority on the question. Its holding is unambiguous: the Copyright Act's use of the word "author" requires human authorship. The Copyright Office cited this as a "bedrock requirement" in its 2022 rejection, the district court echoed it, and the circuit court confirmed it. The Supreme Court letting that stand means there is no near-term path through the federal courts to change the rule.
For any business that was holding out hope that copyright protection for AI-generated outputs was around the corner, that hope is gone.
The Practical Consequence for Businesses Using Generative AI
Most companies using generative AI are not trying to copyright fully autonomous machine output. They are using tools like Midjourney, Adobe Firefly, or ChatGPT to assist human creators. That is a different legal category, and one where the Copyright Office has shown more flexibility. In the Zarya of the Dawn decision (February 2023), the Office registered the human-authored text and the human selection and arrangement of a Midjourney-assisted comic while denying protection for the individual AI-generated images. The principle: the human authorship must be identifiable and meaningful.
That framework matters for GCs advising creative, marketing, and product teams. The question is no longer "can AI output be copyrighted?" The question is "how much human creative input did your team contribute, and can you document it?"
Here is what that means operationally:
If a marketing team runs a prompt and accepts the first output without significant modification, that work likely has thin or no copyright protection. A competitor could copy it. You cannot sue them.
If a designer uses AI generation as a starting point, makes selection decisions, modifies elements, arranges components, and integrates the result into a larger work, the human-authored portions may be protectable. But only those portions.
The gap between "I used AI" and "I authored something with AI assistance" is where copyright lives now. That gap requires documentation.
Where the Open Questions Still Live
The Thaler cert denial closes one door but leaves several others open.
Human-AI collaboration. The Copyright Office has continued refining its position on hybrid works since its March 2023 registration guidance. There is no bright-line rule for how much human input is "enough." Each registration is evaluated on its own facts. That case-by-case approach is going to generate litigation, and GCs whose companies are creating significant AI-assisted content need to track the developing standards.
Works made for hire. If a company's employees use AI tools to produce work product, the human authorship component still runs through the employees. The question of whether the employer holds valid copyright on AI-assisted work product requires analysis of both the human contribution and the work-for-hire doctrine. This has not been tested squarely.
International variation. The U.S. rule is "human author required." Chinese courts have gone the other way in at least one case: in late 2023, the Beijing Internet Court held that an AI-generated image reflecting a human user's creative choices could be copyrighted. The EU is still sorting it out. Companies operating internationally may find that AI-generated content has copyright protection in some markets and none in others. That creates real licensing and enforcement complications.
AI training data. The copyright question that will dominate 2026 and beyond is not about AI outputs. It is about AI inputs. The litigation around whether AI companies infringed copyright by training on human-authored works (including Getty Images v. Stability AI and the New York Times litigation) is where the money is.
What to Do Now
Audit your AI content pipeline. Every marketing asset, product design, written output, or image generated with AI assistance needs to be reviewed. For each one, ask: what was the human contribution? Was it substantial enough to support copyright registration?
Update your IP documentation practices. When teams create AI-assisted work, log the prompts used, the iterations reviewed, the selections made, and the modifications applied. This is not just good practice. It is the evidence you will need if you ever want to register or defend a copyright.
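One lightweight way to operationalize that logging is a structured record per asset. The sketch below is a hypothetical schema, not anything prescribed by the Copyright Office: the `ProvenanceRecord` class and its field names are illustrative, but the categories track the human-contribution evidence described above (prompts, iterations reviewed, selections, modifications).

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One log entry per AI-assisted asset (hypothetical schema)."""
    asset_id: str
    tool: str                    # e.g. "Midjourney", "Adobe Firefly"
    prompts: list = field(default_factory=list)
    iterations_reviewed: int = 0
    selections: list = field(default_factory=list)     # outputs a human chose, and why
    modifications: list = field(default_factory=list)  # human edits applied afterward
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize for storage alongside the asset in a DAM or ticket system
        return json.dumps(asdict(self), indent=2)

# Example: a designer documents a Firefly-assisted banner
record = ProvenanceRecord(
    asset_id="banner-2026-q1-launch",
    tool="Adobe Firefly",
    prompts=["spring product launch, watercolor style"],
    iterations_reviewed=12,
    selections=["variant 7 chosen for composition"],
    modifications=["recolored palette", "replaced headline typography"],
)
print(record.to_json())
```

However it is stored, the point is contemporaneous capture: a record written at creation time is far more persuasive evidence of human authorship than one reconstructed during a registration or enforcement dispute.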
Stop assuming AI outputs are protected. If your company has been treating AI-generated content as proprietary without registering it or without documenting human authorship, assume a competitor can legally copy that content. Adjust your competitive strategy accordingly.
Consider registration for high-value AI-assisted works. For creative assets that matter commercially, invest in the registration process. Even thin copyright protection is better than none. Work with outside IP counsel who has handled AI authorship questions specifically. This is not standard copyright work.
Don't hold out for Congress. Multiple legislative proposals to address AI and copyright have died in committee. The Copyright Office's official position is that current law does not support protecting autonomous AI output. Congress could change that, but there is no timeline.
The Supreme Court's refusal to hear Thaler's case is not a surprise. It is a confirmation. U.S. copyright law requires a human author, and no federal court in the country is going to change that right now. Build your content strategy around that reality.
Don Ho, Esq. is Co-Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses. When he's not navigating the intricate web of AI business policies and regulations, he's probably on dad duty or enjoying a cuppa Taiwanese oolong.