Author: Katie Robbert

  • AI Ate the Proving Ground: The Leadership Crisis Nobody Sees Coming

    Everyone’s talking about AI killing the billable hour. Outcome-based pricing, gain-share models, managed services — it’s all over LinkedIn, and it’s all valid. But it’s also the wrong conversation.

    The billable hour wasn’t just a revenue model. It was the training system. And that’s the part nobody’s mourning.

    Here’s What I Mean

    For decades, the way you became a senior consultant — or a senior anything in knowledge work — was by doing the work. The tedious, repetitive, detail-heavy work. You pulled the data. You built the decks. You sat through 47 stakeholder interviews and watched how the senior partner navigated the room when things got tense.

    You didn’t learn judgment from a training program. You learned it from reps. Thousands of hours of reps.

    AI just automated the reps.

    McKinsey’s internal tool, Lilli, can now do roughly 80% of what a junior analyst used to do — scanning documents, drafting summaries, building slides. BCG’s tool polishes presentations automatically. Deloitte is using GPT-based tools to write proposals. And the firms are responding exactly the way you’d expect: PwC cut graduate hiring by 30%. The Big Four have frozen entry-level salaries for three consecutive years. AI roles now outnumber entry-level consultant positions.

    The consulting pyramid — wide base of juniors, narrow top of partners — is becoming what researchers are calling “the obelisk.” Tall, narrow, fewer people at the bottom.

    And if you’re looking at that and thinking “good, efficiency,” I’d ask you to zoom out.

    The Three-Sided Squeeze

    Because the junior pipeline drying up isn’t happening in isolation. It’s happening at the same time as two other things:

    Clients are poaching the seniors. The in-housing trend is well-documented at this point. Companies bring in consultants for the initial build, learn the playbook, hire their own people, and cut the cord. The consultants who do have experience and judgment are getting absorbed by the client side. Consulting firms are being forced into “capacity building” roles — basically training their replacements — just to keep the engagement going.

    The market doesn’t want generalists anymore. Budget authority for AI is shifting from the CIO to line-of-business leaders — your CMOs, your COOs — and they want vertical expertise tied to measurable outcomes. The well-rounded generalist consultant, precisely the type of person the old training model was designed to produce, is exactly what the market is no longer buying.

    So you’ve got AI replacing the bottom, clients poaching the top, and a market that doesn’t want what the middle produces. That’s not a pricing problem. That’s a “who’s going to be left to run these firms in 2030?” problem.

    “We’ll Figure It Out” Is Not a Strategy

    Here’s what bothers me most about this. The response from most firms, based on every piece of research I’ve read, is essentially: hand the juniors some AI tools and hope they pick up higher-order skills on their own.

    That’s not a plan. That’s wishful thinking dressed up in a technology budget.

    The proving ground — the place where future leaders actually developed the instincts and judgment to lead — was the work itself. The grind was the training. When you remove the grind without replacing it with something intentional, you don’t get more efficient leaders. You get people who can operate AI tools but have never had to develop the critical thinking that makes those tools useful.

    Think about it this way: if you’ve never had to manually pull apart a dataset to understand why the numbers don’t add up, how do you know when the AI’s output is wrong? If you’ve never sat in a room where a client pushed back hard on your recommendation, how do you develop the instinct to read the room? These aren’t skills you learn from a tutorial. They come from doing the work, making mistakes, and having someone more experienced show you where you went sideways.

    This Isn’t Just a Consulting Problem

    And here’s why this matters even if you’ve never hired a consultant in your life.

    Every knowledge-work industry has its own version of this proving ground. Law firms had junior associates who learned by reviewing thousands of documents. Marketing teams had analysts who built intuition by manually pulling reports week after week. Finance had junior traders who learned market feel through repetitive modeling.

    AI automated the repetitive work in all of these fields. And the efficiency gains are real — I’m not arguing they aren’t. But the question nobody’s asking is: where does the next generation of CMOs, managing directors, and senior partners come from if the work that used to train them no longer exists?

    This is the 10/20/70 problem. Research consistently shows that successful transformation is 10% algorithms, 20% technology, and 70% business process and people. We’re spending all our energy on the 10% and 20%, and almost none on the 70% that actually determines whether any of this works.

    So What Do We Do About It?

    I’m not going to pretend I have the whole answer, but I do know what the answer isn’t: generic AI literacy programs. Slapping a “prompt engineering for beginners” course on your LMS and calling it workforce development is not going to close this gap. I’ve seen too many of those programs, and they produce people who can write a decent prompt but still can’t think critically about the output.

    What has to change is that the training model needs to become intentional instead of incidental. The old system worked because the learning was baked into the work — you didn’t have to design it, it just happened. That luxury is gone. Now somebody has to actually design the experiences that develop judgment, critical thinking, and strategic instincts. And that “somebody” is leadership.

    A few things I think need to happen:

    Redefine what “entry-level” means. The new junior role isn’t “do the research.” It’s “evaluate whether the AI’s research is trustworthy and figure out what’s missing.” That’s a fundamentally different skill set, and most organizations aren’t hiring for it or developing it.

    Build structured learning into the work, on purpose. If the reps aren’t happening organically anymore, you have to create them. That means rotational programs, case-based learning, and — critically — pairing junior people with experienced practitioners who can show them why the AI’s answer is only 60% of the story.

    Stop treating the people problem as a technology problem. New tools don’t fix a broken development pipeline. If your answer to “how do we train the next generation of leaders?” starts with a software purchase, you’re solving the wrong problem.

    The Uncomfortable Truth

    The consulting industry — and every knowledge-work industry behind it — is about to find out what happens when you optimize for efficiency without thinking about development. The gains are real today. The leadership vacuum shows up in five years.

    And by the time you notice it, the people who could have filled those roles will have gone somewhere else, built their own thing, or never developed the skills in the first place.

    Nobody held a funeral for the billable hour. Maybe someone should have. Not because the billing model mattered, but because nobody stopped to think about what else we were burying along with it.

  • Your AI Training Program Is Failing Because You Skipped the Hard Part

    Raise your hand if your company bought an AI training program in the last 18 months. Now keep your hand up if anyone actually changed how they work because of it.

    Yeah. That’s what I thought.

    Here’s what happened: somebody in leadership — maybe the CEO, maybe the CTO, maybe an enthusiastic VP who went to a conference — came back and said “we need AI training.” So the company bought a platform, rolled out a course, tracked who completed it, and moved on. Box checked. Budget spent. Progress declared.

    Except nobody’s behavior changed. And now you’re wondering why.

    The Pattern I Keep Seeing

    I’ve had this conversation with enough companies at this point that I can almost predict the arc. It starts with the announcement — “we’re investing in AI readiness.” There’s usually an all-hands or at least an email. Then comes the LMS module: intro to generative AI, what is a prompt, here are some use cases. Maybe a “prompt engineering for beginners” course if they’re feeling ambitious.

    Everyone completes it. Some people actually enjoy it. And then… nothing. The people who were already experimenting with AI are annoyed because they sat through content they outgrew six months ago. The people who are genuinely struggling are even more lost because the course moved too fast. And the people in the middle got a certificate they’ll never think about again.

    Here’s the number that should bother you: less than 3% of the workforce are what researchers would call actual AI practitioners — people who use AI in their workflows in ways that produce measurable productivity gains. The rest are spread across a spectrum from “haven’t touched it” to “I use ChatGPT sometimes.” And most training programs treat that entire spectrum as one audience.

    That’s not training. That’s a content delivery exercise.

    I’m Saying This as Someone Who Did It Too

    I want to be transparent here: we did this at Trust Insights. We built the 101 content. We created the introductory courses. And at the time, it was the right thing to do — people needed a starting point, and that foundational content served a real purpose.

    But we’ve moved past it because the market has moved past it. The question is no longer “what is generative AI and should I care?” Most people have answered that for themselves, one way or another. The question now is: “I’m at point A with this technology, and I need to get to point B — and those two points are different for every single person in my organization.”

    That’s a fundamentally different problem, and a generic course doesn’t solve it.

    This Is a Leadership Problem Disguised as a Training Problem

    Here’s where it gets uncomfortable. The reason most companies default to one-size-fits-all training is that the alternative requires actual leadership work that nobody wants to do.

    To train people effectively, you first have to understand where they are. Not as a company. As individuals. You need to know who’s still at basic prompt-and-response chat, who’s already building custom AI agents, who’s operating agentic systems, and who hasn’t opened the tool at all. Those are different people with different needs, and the gap between them is enormous.

    We developed a framework for this — six levels of generative AI proficiency, ranging from basic chat usage all the way through engineering custom agentic systems. And one of the most important things about it is the note at the bottom: higher isn’t inherently better. The goal isn’t to get everyone to level six. The goal is to match the proficiency level to the role.

    An individual contributor who uses Deep Research and NotebookLM effectively is exactly where they need to be. A director who’s still at level one has a different problem — and it’s not a training problem. It’s a judgment and delegation gap that no course is going to fix.

    But here’s the thing: you can’t make any of those distinctions if you haven’t done the assessment work first. And assessment is harder than buying a course. It requires understanding both the technology and your people. It requires conversations, not just completion rates.
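    To make the assessment idea concrete, here is a minimal sketch of what a proficiency-gap report might look like. Everything in it is hypothetical: the role names, the people, and the target levels are illustrations, not our framework. The only things carried over from above are the six-level scale and the principle that the target depends on the role.

    ```python
    # A minimal sketch of a proficiency-gap report. Levels run 1 (basic
    # chat usage) through 6 (engineering custom agentic systems). Higher
    # isn't inherently better: the target depends on the role. All names
    # and numbers below are hypothetical examples.

    ROLE_TARGETS = {
        "individual contributor": 3,  # e.g., uses research and notebook tools well
        "director": 2,                # fluent enough to evaluate and delegate
        "ai engineer": 6,             # builds and maintains agentic systems
    }

    people = [
        {"name": "Avery", "role": "individual contributor", "level": 3},
        {"name": "Blake", "role": "director", "level": 1},
        {"name": "Casey", "role": "ai engineer", "level": 4},
    ]

    # Compare each person's assessed level against their role's target.
    for person in people:
        target = ROLE_TARGETS[person["role"]]
        gap = target - person["level"]
        status = "on target" if gap <= 0 else f"{gap} level(s) below target"
        print(f"{person['name']} ({person['role']}): {status}")
    ```

    Nothing about this is sophisticated, and that is the point: a report like this can only exist after someone has done the assessment conversations, which is exactly the work most organizations skip.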

    This is the 10/20/70 reality. Successful transformation is roughly 10% algorithms, 20% technology, and 70% business process and people. Most AI training budgets are 100% allocated to the 10% and 20%, and exactly 0% allocated to the 70% that determines whether any of it sticks.

    What Actually Works (And What We’re Doing Now)

    The shift we’ve made — and what I’d recommend to any organization — is from curriculum delivery to proficiency-mapped development. That means a few things in practice:

    Assess before you train. Figure out where your people actually are before you decide what they need. This sounds obvious and almost nobody does it. They skip straight to the content because the content is the easy part.

    Stop running everyone through the same thing. The person who needs to understand what AI is and the person who needs to learn how to build multi-step automations are not in the same class. Stop pretending they are.

    Personalize through practice, not just content. This is why we shifted to workshops. You can’t develop AI judgment by watching someone else use AI. You have to use it yourself, on your own problems, with someone experienced enough to tell you where your thinking went wrong. That’s the model that actually changes behavior.

    Match the investment to the role. Not everyone needs the same depth. What matters is that each person is proficient enough for what their job actually requires — and that leadership can articulate what “proficient enough” looks like for each role.

    Stop Measuring Completion. Start Measuring Capability.

    This is the part that most training programs get fundamentally wrong, and it drives me crazy. The metric everyone tracks is completion. Did they finish the course? Did they pass the quiz? Did they get the certificate?

    None of that tells you whether they can do anything differently on Monday morning.

    The real question is capability: can this person apply AI in a way that makes their work better, faster, or more accurate? Can they evaluate an AI output and know when it’s wrong? Can they identify where AI fits in their workflow and where it doesn’t?

    If you can’t answer those questions for your team, you don’t have a training problem. You have a leadership problem. Because it means nobody has defined what proficiency looks like for each role, and until somebody does, no training program — no matter how good the content is — is going to move the needle.

    The Hard Part Is the Whole Point

    Look, generic training programs aren’t evil. Some of them are genuinely good. The introductory content we built was good — it helped people get started, and I don’t regret building it.

    But “getting started” was 2023’s problem. We’re past that now. The organizations that are actually seeing results from AI aren’t the ones with the best training platform. They’re the ones where leadership did the unglamorous work of understanding where each person is, defining where they need to be, and building intentional pathways between those two points.

    That’s not a vendor problem. That’s not a budget problem. That’s a leadership problem. And the only people who can solve it are the leaders who are willing to do the hard part instead of buying another course and calling it done.