
The AI That Screened Out 40-Year-Olds Just Became a Class Action

By Don Ho, Esq. | February 23, 2026

---

A federal judge in California just authorized notice in a nationwide class action against Workday. The case is Mobley v. Workday, Inc., and it is the clearest signal yet that AI hiring tools have moved from regulatory theory to active litigation. The allegation: Workday's AI-powered hiring software unlawfully screened out job applicants aged 40 and older. If you're in the class, you have until March 7, 2026 to opt in. If you're an employer using AI hiring tools, you have until right now to understand what this means for your exposure.

This is not a novel legal theory. The Age Discrimination in Employment Act has been law since 1967. What's new is the mechanism: an algorithm that encodes bias at scale, applied to hundreds of thousands of candidates, producing discriminatory outcomes faster than any human screener ever could.

---

How an AI Hiring Tool Becomes a Discrimination Engine

The plaintiff, Derek Mobley, applied to over 100 positions on the Workday platform. He was rejected each time. He is Black, over 40, and has a disability. His lawsuit claims Workday's AI screening tool systematically disadvantaged candidates who shared those characteristics.

In January 2026, Judge Rita Lin of the Northern District of California expanded the case into a nationwide class action. The scope is significant: this covers any applicant who applied through Workday's platform and was rejected by its AI screening tool, across all employers who used it.

Here is what makes this case structurally important: Workday is the vendor, not the employer. The companies using Workday's hiring software to screen candidates are the employers making the hire-or-reject decisions. But the lawsuit targets the tool provider directly, on the theory that the AI product itself is an employment agency or agent within the meaning of the ADEA.

Courts have not definitively settled whether AI tool vendors face direct liability under federal employment discrimination law. Workday is fighting that theory. This case may resolve it.

---

The Liability Gap That Every Employer Is Ignoring

Here is the actual problem for the thousands of companies using AI hiring tools right now.

Most procurement conversations about AI hiring software focus on efficiency. How much faster does the screening go? What is the false-positive rate? How does it integrate with the applicant tracking system (ATS)? Almost none of those conversations include a rigorous analysis of adverse impact on protected classes.

The EEOC's guidance on AI and employment discrimination has been clear for years: if a selection procedure has adverse impact on a protected class, it is presumptively unlawful unless the employer can show it is job-related and consistent with business necessity. That standard does not care whether the selection procedure is a test, an interview, or an algorithm. Disparate impact is disparate impact.

The practical failure happens in three places:

During procurement. Employers rarely ask vendors for adverse impact analysis data. Vendors rarely volunteer it. The sales pitch is productivity. The compliance implications are footnoted.

During deployment. Even if the vendor provides some statistical validation, employers often deploy the tool across all roles without assessing whether the validation study matches their specific candidate pool or job requirements.

During audit. Most employers using AI hiring tools have never run a disparate impact analysis on their own hiring outcomes after deployment. They don't know whether their tool is producing discriminatory results because they haven't looked.

If you are currently using any AI-assisted hiring, screening, or scoring tool, the question your legal team should be asking is: what does our rejection rate look like by age, race, and gender? If you don't know the answer, you are operating blind.

---

What the Workday Case Signals for 2026

The class action against Workday is not happening in isolation. 2026 is the year employer AI liability goes from theoretical to operational.

New York City's Local Law 144 requires bias audits for automated employment decision tools used in hiring, with annual public reporting. Employers who skipped compliance are now in a much more exposed position as similar laws spread.

The EEOC's Strategic Enforcement Plan explicitly prioritizes AI and algorithmic discrimination as an enforcement focus. Agency resources are going toward exactly these cases.

Colorado's AI Act becomes enforceable June 30, 2026 and imposes obligations on deployers of high-risk AI systems, including employment-related AI. If you're in Colorado, the clock is running.

The pattern is consistent: regulators and plaintiffs' attorneys have identified AI hiring tools as a category with documented discriminatory potential, have established legal theories to pursue them, and are actively building the case inventory. The Workday class action is a signal of the volume that's coming, not an outlier.

---

The Four-Step AI Hiring Audit

If you have AI anywhere in your hiring process, run this before your next hiring cycle:

Step 1: Inventory every AI touchpoint in hiring. Resume screening tools, automated video interview analysis, skills assessments with AI scoring, ATS ranking algorithms. If a machine is influencing who advances in your hiring pipeline, it goes on the list.

Step 2: Request adverse impact data from each vendor. Ask directly: has this tool been validated for adverse impact on protected classes? Under what conditions and with what candidate pool? Get it in writing. If the vendor can't or won't answer, that is your answer.

Step 3: Run your own adverse impact analysis. Pull your hiring outcomes for the past 12 months. Calculate selection rates by age group, race, and gender for each stage where AI is involved. If any protected group's selection rate is less than 80% of the highest group's rate (the EEOC's four-fifths rule), you have a potential disparate impact problem.

Step 4: Document your validation. If you have done the analysis and the tool passes, document it. If the tool fails, you need a business necessity justification or a new tool. Either way, the documentation is your defense if a charge is filed.
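The four-fifths check in Step 3 is simple arithmetic, and it helps to see it written out. Here is a minimal Python sketch that computes selection rates by group and flags any group whose rate falls below 80% of the highest group's rate. The group labels and outcome data are invented for illustration; a real audit would segment by each protected characteristic (age band, race, gender) and by each AI-influenced hiring stage.

```python
from collections import Counter

def four_fifths_check(outcomes):
    """Check selection rates by group against the EEOC's four-fifths rule.

    outcomes: list of (group, selected) tuples, where selected is True
    if the candidate advanced past the AI-screened stage.
    """
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, adv in outcomes if adv)
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    # Impact ratio: each group's selection rate vs. the highest group's rate.
    # A ratio below 0.8 indicates potential disparate impact.
    return {g: {"rate": round(r, 3),
                "impact_ratio": round(r / top, 3),
                "flag": r / top < 0.8}
            for g, r in rates.items()}

# Hypothetical data: applicants 40+ advance at half the rate of under-40s.
sample = ([("under_40", True)] * 50 + [("under_40", False)] * 50
          + [("40_plus", True)] * 25 + [("40_plus", False)] * 75)
results = four_fifths_check(sample)
```

In this hypothetical, the 40+ group's impact ratio is 0.5, well under the four-fifths threshold, so it gets flagged. The same few lines of arithmetic run against twelve months of real hiring outcomes are the difference between knowing your exposure and guessing at it.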

One more point worth making: the companies most exposed here are not just large enterprises. Any company using Workday, Lever, Greenhouse, or any other ATS with AI-scored screening has this exposure. Company size does not limit EEOC jurisdiction. A 50-person company that auto-rejected a 47-year-old qualified candidate because an AI ranked them below the cutoff has the same legal problem as a Fortune 500.

None of this is optional. The ADEA is 58 years old. The EEOC's position on AI and disparate impact is not new. The only thing that has changed is the scale at which these tools operate and the speed at which plaintiffs' attorneys have figured out how to build class actions around them.

The Workday case may or may not establish vendor liability. What it has already established is that employers who use AI hiring tools without auditing them are one EEOC charge away from a very expensive conversation.

---

Don Ho, Esq. is Co-Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses. When he's not navigating the intricate web of AI business policies and regulations, he's probably on dad duty or drinking a cup of Taiwanese oolong.