2026-02-18 · Don Ho · 1320 words

The AI Productivity Paradox Is Back, and Your Board Should Be Nervous

Companies have poured over $250 billion into AI. A new study of 6,000 executives says almost none of them have anything to show for it.

The Numbers That Should Scare Every CFO

The National Bureau of Economic Research published a study this month surveying 6,000 CEOs, CFOs, and senior executives across the U.S., U.K., Germany, and Australia. The headline finding: nearly 90% of firms report AI has had zero measurable impact on employment or productivity over the past three years.

Not "slight improvement." Not "still ramping up." Zero.

This is a $250 billion experiment with a control group that nobody planned. Companies spent the money. They hired the consultants. They ran the pilots. And when you ask the people signing the checks whether any of it mattered, nine out of ten say no.

Apollo's chief economist Torsten Slok put it bluntly: "AI is everywhere except in the incoming macroeconomic data." No movement in employment numbers. No movement in productivity data. No movement in inflation figures. Outside the Magnificent Seven tech companies, there are no signs of AI in profit margins or earnings expectations.

The executives who do use AI report spending about 1.5 hours per week with it. That's roughly one long status meeting's worth of attention per week. A quarter of respondents said they don't use AI in the workplace at all.

We've Seen This Movie Before

Economist Robert Solow won a Nobel Prize partly for noticing this exact pattern 40 years ago. In 1987, he wrote: "You can see the computer age everywhere but in the productivity statistics."

The setup was identical. Transistors, microprocessors, integrated circuits. Revolutionary technology that promised to transform every workplace. Companies bought the hardware. They hired the IT departments. And productivity growth actually dropped, falling from 2.9% annually to 1.1%.

It took nearly two decades before computing delivered on its productivity promises. The technology had to mature, workflows had to be rebuilt from scratch, and an entire generation of workers had to learn new ways of operating. The hardware wasn't the bottleneck. The humans were.

That's what's happening with AI right now. The parallel is almost eerie. MIT researchers claimed in 2023 that AI could boost worker performance on individual tasks by nearly 40%. A follow-up MIT analysis from 2024, measuring the economy as a whole rather than single tasks, put AI's total productivity gain closer to 0.5% over the next decade. Nobel laureate Daron Acemoglu called it "disappointing relative to the promises that people in the industry and in tech journalism are making."

The Real Problem: Companies Are Buying Hammers Without Knowing What to Build

Here's what I've seen deploying AI across seven different industries: the technology works. The problem is that most companies treat AI like a new printer. They plug it in, hand it to their existing team, point it at their existing processes, and wait for magic.

That approach failed with computers in the 1980s. It's failing with AI today.

The companies getting actual ROI from AI are the ones doing something fundamentally different. They aren't adding AI to existing workflows. They're redesigning the workflow around what AI can do. That means rewriting job descriptions, restructuring teams, rebuilding processes from the ground up.

ManpowerGroup's 2026 Global Talent Barometer surveyed nearly 14,000 workers across 19 countries. AI usage increased 13% in 2025. But worker confidence in AI's usefulness dropped 18%. People are using the tools more and trusting them less. That's the clearest signal I've seen that the implementation problem runs deeper than the technology itself.

PwC's January 2026 survey of 4,454 CEOs found that 56% of companies are getting nothing measurable out of their AI investments. Not low returns. Nothing.

What Actually Works (From Someone Who's Deployed This)

I've built AI systems for lending companies, law firms, foreclosure services, restaurants, and consulting firms. The pattern is consistent across all of them.

What fails: Buying an AI tool, giving it to your team, telling them to "use AI more," and measuring nothing. This is the default playbook at most mid-market companies. A VP reads about AI in a McKinsey report, buys a few enterprise licenses, sends an email saying "start using AI," and then asks for results in the quarterly review. The team nods, opens the tool once, can't figure out how it fits their actual work, and goes back to doing things the old way. The license renews. Nobody measures anything. The CFO eventually asks why the AI line item keeps growing with no corresponding efficiency gain. Nobody has an answer.

What works: Picking one specific, measurable process. Mapping every step. Identifying where AI creates genuine leverage versus where it just creates a faster version of a bad process. Measuring before, during, and after. Training the humans who interact with the AI outputs. Building a feedback loop that catches failures before they hit clients.

Here's a concrete example. I helped a lending company automate part of their document review process. Before AI, a paralegal spent 45 minutes per file pulling data from loan applications, cross-referencing against compliance checklists, and flagging discrepancies. The first AI implementation saved about 10 minutes per file. Not bad, but nowhere near transformative. The breakthrough came when we redesigned the entire intake workflow so the AI could process documents before a human ever saw them, pre-flagging only the exceptions that required judgment. That cut the process from 45 minutes to 8 minutes. But the savings didn't come from better AI. They came from rebuilding the process around what AI is actually good at: reading structured data fast and catching pattern deviations. The AI was the same. The workflow was completely different.
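The arithmetic behind that example is worth making explicit. A minimal sketch, using the per-file times from the story above (the eight-hour working day is my assumption, not a figure from the engagement):

```python
# Illustrative only: comparing the three stages of the document-review
# process described above, using the per-file minutes from the text.
MINUTES_PER_DAY = 8 * 60  # one paralegal's working day (assumed)

stages = {
    "manual review": 45,       # minutes per file, before AI
    "AI bolted on": 35,        # first implementation: ~10 minutes saved
    "workflow redesigned": 8,  # AI pre-flags exceptions before a human looks
}

for name, minutes in stages.items():
    files_per_day = MINUTES_PER_DAY // minutes        # throughput per person
    reduction = 100 * (45 - minutes) / 45             # % cut vs. baseline
    print(f"{name:>20}: {minutes:>2} min/file, "
          f"{files_per_day:>2} files/day, {reduction:.0f}% time cut")
```

Bolting AI onto the old process cuts about 22% of the time. Redesigning the process around the AI cuts about 82% and roughly sextuples per-person throughput, which is the gap between "a nice tool" and "a different operation."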

The 10% of companies in that NBER study who ARE seeing results aren't using better AI. They're using AI better. The distinction matters more than any model upgrade.

Microsoft CEO Satya Nadella made this point at Davos last month, calling on business leaders to "reinvent the knowledge worker" rather than bolt AI onto 1990s workflows. He's right, but that message gets lost in the noise of every SaaS company claiming their AI feature will save you 40 hours a week.

What This Means for Your Budget Conversation

If you're in a board meeting this quarter defending AI spending, here's what the data actually supports:

Stop saying: "AI will transform our productivity." The macro data doesn't back you up, and your board members read Fortune.

Start saying: "We're investing in process redesign that happens to use AI." The difference sounds subtle. It's not. The first promises a technology miracle. The second promises a specific operational improvement that you can measure and hold people accountable for.

Budget accordingly: The AI tools themselves are getting cheaper every month. Anthropic just released Claude Sonnet 4.6 at the same price as its predecessor, with substantially better capabilities. The expensive part was never the software. It's the organizational change management required to make the software useful.

Every dollar you spend on AI licensing should be matched by two dollars on training, process redesign, and measurement infrastructure. If your AI budget is 90% software and 10% implementation, you're the statistic in that NBER study.
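As a back-of-the-envelope check on that ratio, here is what the 1:2 rule implies for a hypothetical $300,000 annual AI budget (the dollar figure is illustrative, not from any survey cited above):

```python
# Sanity check of the 1:2 licensing-to-implementation rule argued above,
# against a hypothetical $300k annual AI budget.
total_budget = 300_000

# 1:2 rule: one part licensing, two parts training,
# process redesign, and measurement infrastructure.
licensing = total_budget / 3
implementation = total_budget - licensing
print(f"licensing:      ${licensing:,.0f}")       # $100,000
print(f"implementation: ${implementation:,.0f}")  # $200,000

# The failure mode from the NBER statistic: 90% software, 10% everything else.
risky_licensing = 0.9 * total_budget              # $270,000 on seats
risky_implementation = 0.1 * total_budget         # $30,000 to make them useful
```

Same total spend, radically different odds of showing up on the right side of that 90/10 split.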

The Bottom Line

The AI productivity paradox isn't an argument against AI. The computing productivity paradox wasn't an argument against computers. Both are arguments against lazy implementation.

The technology works. I see it work every day. But "the technology works" and "your company will see results" are separated by a massive gap called organizational readiness. The 90% of executives reporting zero impact aren't evidence that AI is overhyped. They're evidence that buying a tool and deploying a tool are two very different things.

The companies that figure this out in the next 18 months will have a structural advantage that compounds for a decade. The companies that keep buying AI subscriptions and hoping for a productivity miracle will keep showing up in surveys as the 90%.

Which group are you in?