Raise your hand if your company bought an AI training program in the last 18 months. Now keep your hand up if anyone actually changed how they work because of it.
Yeah. That’s what I thought.
Here’s what happened: somebody in leadership — maybe the CEO, maybe the CTO, maybe an enthusiastic VP who went to a conference — came back and said “we need AI training.” So the company bought a platform, rolled out a course, tracked who completed it, and moved on. Box checked. Budget spent. Progress declared.
Except nobody’s behavior changed. And now you’re wondering why.
The Pattern I Keep Seeing
I’ve had this conversation with enough companies at this point that I can almost predict the arc. It starts with the announcement — “we’re investing in AI readiness.” There’s usually an all-hands or at least an email. Then comes the LMS module: intro to generative AI, what is a prompt, here are some use cases. Maybe a “prompt engineering for beginners” course if they’re feeling ambitious.
Everyone completes it. Some people actually enjoy it. And then… nothing. The people who were already experimenting with AI are annoyed because they sat through content they outgrew six months ago. The people who are genuinely struggling are even more lost because the course moved too fast. And the people in the middle got a certificate they'll never think about again.
Here’s the number that should bother you: fewer than 3% of the workforce qualify as what researchers would call actual AI practitioners, people who use AI in their workflows in ways that produce measurable productivity gains. The rest are spread across a spectrum from “haven’t touched it” to “I use ChatGPT sometimes.” And most training programs treat that entire spectrum as one audience.
That’s not training. That’s a content delivery exercise.
I’m Saying This as Someone Who Did It Too
I want to be transparent here: we did this at Trust Insights. We built the 101 content. We created the introductory courses. And at the time, it was the right thing to do — people needed a starting point, and that foundational content served a real purpose.
But we’ve moved past it because the market has moved past it. The question is no longer “what is generative AI and should I care?” Most people have answered that for themselves, one way or another. The question now is: “I’m at point A with this technology, and I need to get to point B — and those two points are different for every single person in my organization.”
That’s a fundamentally different problem, and a generic course doesn’t solve it.
This Is a Leadership Problem Disguised as a Training Problem
Here’s where it gets uncomfortable. The reason most companies default to one-size-fits-all training is that the alternative requires actual leadership work that nobody wants to do.
To train people effectively, you first have to understand where they are. Not as a company. As individuals. You need to know who’s still at basic prompt-and-response chat, who’s already building custom AI agents, who’s operating agentic systems, and who hasn’t opened the tool at all. Those are different people with different needs, and the gap between them is enormous.
We developed a framework for this — six levels of generative AI proficiency, ranging from basic chat usage all the way through engineering custom agentic systems. And one of the most important things about it is the note at the bottom: higher isn’t inherently better. The goal isn’t to get everyone to level six. The goal is to match the proficiency level to the role.
An individual contributor who uses Deep Research and NotebookLM effectively is exactly where they need to be. A director who’s still at level one has a different problem — and it’s not a training problem. It’s a judgment and delegation gap that no course is going to fix.
But here’s the thing: you can’t make any of those distinctions if you haven’t done the assessment work first. And assessment is harder than buying a course. It requires understanding both the technology and your people. It requires conversations, not just completion rates.
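To make the assessment-then-match idea concrete, here’s a minimal sketch in Python. The level labels, roles, targets, and people below are hypothetical placeholders for illustration, not the actual Trust Insights framework; the point is only that you need two inputs, where each person is and what their role requires, before any training decision makes sense.

```python
# Hypothetical sketch of proficiency-gap mapping. Level names, role
# targets, and people are illustrative placeholders, not the real framework.

LEVELS = [
    "untouched",             # level 0: hasn't opened the tools at all
    "basic_chat",            # level 1: prompt-and-response usage
    "structured_prompting",  # level 2
    "tool_integration",      # level 3: e.g., research tools in daily workflow
    "custom_agents",         # level 4
    "agentic_systems",       # level 5: engineering custom agentic systems
]

# What each role actually requires -- higher isn't inherently better.
role_targets = {"ic_analyst": 3, "director": 2, "automation_engineer": 5}

# Where individual people are today, learned from assessment conversations.
assessed = {
    "alice": ("ic_analyst", 3),
    "bob": ("director", 0),
    "cara": ("automation_engineer", 3),
}

def training_gaps(assessed, role_targets):
    """Return each person's gap between current level and their role's target."""
    gaps = {}
    for person, (role, current) in assessed.items():
        target = role_targets[role]
        gaps[person] = max(0, target - current)  # no gap if at or above target
    return gaps

print(training_gaps(assessed, role_targets))
# alice needs nothing (already matched to role); bob and cara need
# different development paths, which no single course can serve.
```

Note that alice, already at her role’s target, gets a gap of zero: running her through an intro course is exactly the wasted effort described above.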
This is the 10/20/70 reality. Successful transformation is roughly 10% algorithms, 20% technology, and 70% business process and people. Most AI training budgets are 100% allocated to the 10% and 20%, and exactly 0% allocated to the 70% that determines whether any of it sticks.
What Actually Works (And What We’re Doing Now)
The shift we’ve made — and what I’d recommend to any organization — is from curriculum delivery to proficiency-mapped development. That means a few things in practice:
Assess before you train. Figure out where your people actually are before you decide what they need. This sounds obvious and almost nobody does it. They skip straight to the content because the content is the easy part.
Stop running everyone through the same thing. The person who needs to understand what AI is and the person who needs to learn how to build multi-step automations are not in the same class. Stop pretending they are.
Personalize through practice, not just content. This is why we shifted to workshops. You can’t develop AI judgment by watching someone else use AI. You have to use it yourself, on your own problems, with someone experienced enough to tell you where your thinking went wrong. That’s the model that actually changes behavior.
Match the investment to the role. Not everyone needs the same depth. What matters is that each person is proficient enough for what their job actually requires — and that leadership can articulate what “proficient enough” looks like for each role.
Stop Measuring Completion. Start Measuring Capability.
This is the part that most training programs get fundamentally wrong, and it drives me crazy. The metric everyone tracks is completion. Did they finish the course? Did they pass the quiz? Did they get the certificate?
None of that tells you whether they can do anything differently on Monday morning.
The real question is capability: can this person apply AI in a way that makes their work better, faster, or more accurate? Can they evaluate an AI output and know when it’s wrong? Can they identify where AI fits in their workflow and where it doesn’t?
If you can’t answer those questions for your team, you don’t have a training problem. You have a leadership problem. Because it means nobody has defined what proficiency looks like for each role, and until somebody does, no training program — no matter how good the content is — is going to move the needle.
The Hard Part Is the Whole Point
Look, generic training programs aren’t evil. Some of them are genuinely good. The introductory content we built was good — it helped people get started, and I don’t regret building it.
But “getting started” was 2023’s problem. We’re past that now. The organizations that are actually seeing results from AI aren’t the ones with the best training platform. They’re the ones where leadership did the unglamorous work of understanding where each person is, defining where they need to be, and building intentional pathways between those two points.
That’s not a vendor problem. That’s not a budget problem. That’s a leadership problem. And the only people who can solve it are the leaders who are willing to do the hard part instead of buying another course and calling it done.