Category: AI & Leadership

  • Your AI Pilot Does Everything Except Its Job

    I was at lunch the other day and the bartender started venting about the parking system in the building. Apparently it’s run by an AI startup — a buddy of the building owner — and here’s what it does: it scans your plate when you pull in, it logs your entry time, it tracks your space, it monitors duration, it calculates what you owe, it processes the data, it generates a record, and it… never sends you a bill.

    Eight steps. Eight whole steps of sophisticated data collection. And the one thing the system was actually built to do? It doesn’t do it.

    I haven’t stopped thinking about that since.

    The Most Expensive Way to Look Busy

    Here’s why that story matters beyond bad parking software. That’s every AI pilot I’ve seen in the last two years.

    Companies are deploying AI systems that do an impressive amount of work. They collect data. They process inputs. They run models. They generate outputs. Executives review dashboards. Teams attend status meetings. Everyone agrees the pilot is “progressing.”

    But nobody asks the uncomfortable question: is it actually doing the thing we needed it to do?

    Gartner predicts that 60% of AI projects will be abandoned by 2026 — and not because the technology failed. It's because the projects were never designed around an outcome. They were designed around activity.

    And there’s a massive difference.

    Activity Is Not Outcome

    This is the part that gets me. I talk to executives all the time who can describe their AI pilot in incredible detail. They’ll walk me through the architecture, the data sources, the model they chose, the vendor they’re working with. They’ll tell me about the meetings, the steering committee, the quarterly reviews.

    Then I ask: what business problem does this solve?

    And I get a pause.

    Not because they don’t have an answer. They do — they’ll say something about efficiency or insights or transformation. But when I push on what specific, measurable outcome they expected by now? That’s when it gets quiet.

    The parking system doesn’t have a technology problem. It has a “nobody defined what done looks like” problem. And so do most AI pilots.

    Why This Keeps Happening

    I keep coming back to the 10/20/70 framework we use at Trust Insights. Ten percent of your success with AI comes from the algorithms. Twenty percent comes from the technology and infrastructure. Seventy percent comes from business process and people.

    That parking system nailed the 10% — the algorithms clearly work, they’re scanning plates and calculating fees just fine. They probably have the 20% covered — the infrastructure is collecting and processing data. But the 70%? The actual business process that says “and then you send the bill”? Nobody built that part.

    It’s the same pattern everywhere. Companies invest heavily in the AI and the technology, and then completely skip the business process work that actually delivers the outcome. Because the process work isn’t exciting. Nobody gets a standing ovation at a board meeting for saying “we mapped out how the output connects to a business action.” They get a standing ovation for saying “we deployed an AI model.”

    The “Buddy” Problem

    Here’s the other part of that parking story that sticks with me. The AI startup got the contract because the founder is a buddy of the building owner. Not because they had a track record. Not because they demonstrated they could solve the parking billing problem. Because they knew a guy.

    This is happening in enterprise AI every single day. Vendor selection based on relationships, golf games, and conference cocktail parties. Not on whether the vendor can actually deliver the specific outcome you need.

    I’m not saying relationships don’t matter — they do, and they should. But when the relationship IS the evaluation criterion, you end up with an eight-step system that doesn’t send bills. You end up with an AI pilot that looks impressive in a slide deck and delivers nothing to the business.

    What “Done” Actually Looks Like

    If you’re running an AI pilot right now, I want you to answer one question: what does this produce that someone in the business can act on?

    Not “what does it process.” Not “what data does it analyze.” What does it produce? What’s the output that triggers a business action?

    For the parking system, it should be simple: it produces a bill, the customer pays, the building makes money. That’s the outcome. Everything else is infrastructure supporting that outcome.

    For your AI pilot, what’s the equivalent? If you can’t articulate it in one sentence, your pilot has the same problem as that parking garage. It’s collecting data and going nowhere.

    Stop Celebrating the Eight Steps

    The most frustrating part of all of this is that everyone involved in these pilots genuinely believes progress is being made. Because they’re measuring the wrong things. They’re measuring activity, not outcome.

    Meetings held. Data processed. Models trained. Dashboards built. Check, check, check, check.

    Bills sent? Revenue generated? Time saved on a specific process? Decision quality improved on a measurable metric?

    Crickets.

    This is a leadership problem, not a technology problem. The technology in that parking system works beautifully. It just doesn’t do the job. And leadership is so dazzled by the fact that AI is doing eight things that nobody noticed it isn’t doing the one thing that matters.

    If your AI pilot can’t answer “what business outcome did this produce this quarter” with a specific number, it’s not a pilot. It’s a very expensive hobby.

    And I say that as someone who genuinely wants these implementations to succeed. But they won’t succeed by accident. They’ll succeed because someone did the unsexy work of defining what done looks like before anyone wrote a line of code. That’s the job. And no amount of sophisticated data collection changes that.

    Katie Robbert is CEO of Trust Insights. When she’s not helping companies figure out whether their AI actually does anything, she’s probably listening to bartenders explain technology better than most consultants do.

  • You Can’t Build AI on a Museum: The Infrastructure Crisis Leadership Refuses to Look At

    Here’s a number I want you to sit with: Gartner predicts that through 2026, organizations will abandon 60% of their AI projects. Not because the AI didn’t work. Because the data underneath it was unusable.

    Sixty percent. That’s not a failure rate — that’s a pattern. And it’s a pattern that should alarm every executive who just signed off on an AI budget without asking a single question about the data infrastructure it’s supposed to run on.

    The Electric Train Problem

    There’s a metaphor I keep coming back to when I talk to companies about this. They want the electric train — the cutting-edge AI tools, the automation, the intelligent agents that are going to transform their business. And they should want those things. The capabilities are real.

    But their infrastructure is a horse-drawn carriage.

    Their data lives in seventeen different systems that don’t talk to each other. Their CRM hasn’t been cleaned since 2019. Their marketing data is in one platform, their sales data is in another, and their customer service data is in a third, and nobody’s reconciled them. Half their analytics still run on spreadsheets that one person maintains and nobody else understands.

    And into this environment, they’re deploying AI and expecting it to deliver insights.

    Here’s what the maturity data actually shows: fewer than one in five organizations report high maturity in any aspect of data readiness. Only 4% — four percent — have high maturity in both data governance and AI governance together. The foundation that AI needs to work doesn’t exist in most organizations. It’s not that it needs a tune-up. It fundamentally isn’t there.

    And yet the AI budget got approved. The data infrastructure budget didn’t.

    Meanwhile, Your Employees Built Their Own Railroad

    Here’s where this gets worse. While leadership has been debating which AI platform to buy, evaluating vendors, running pilots, and building PowerPoint decks about their “AI roadmap” — the workforce didn’t wait.

    Seventy-five percent of workers are already using AI at work. And 78% of them brought their own tools to do it. Nearly half are accessing AI through personal accounts, completely bypassing every security control, every data governance policy, and every compliance framework the company has in place.

    This isn’t hypothetical risk. Ninety percent of IT leaders say they’re concerned about shadow AI from a privacy and security standpoint. And here’s the part that should keep you up at night: 80% have already experienced negative AI-related data incidents. Not “might experience.” Have experienced. Past tense.

    So while leadership was still working on the strategy, the data was already flowing through tools and accounts that nobody in IT even knows about. Your employees didn’t wait for the electric train. They built their own railroad. And it runs through your proprietary data, your customer information, and your intellectual property.

    This Isn’t a Technology Conversation

    I know exactly what happens next in most organizations when they hear these numbers. They buy something. A data quality tool. A governance platform. An AI security solution. Another vendor, another contract, another implementation timeline.

    And it won’t fix the problem. Because this isn’t a technology problem. It’s a leadership problem.

    The reason most organizations have garbage data infrastructure isn’t because the right tool doesn’t exist. It’s because nobody wanted to do the boring work. Data governance is not glamorous. Data cleanup is not exciting. Reconciling seventeen systems into a coherent architecture doesn’t make for a great board presentation.

    This is the 10/20/70 reality again. Ten percent algorithms, 20% technology, 70% business process and people. The infrastructure crisis isn’t in the 10% or the 20%. It’s in the 70% that nobody budgets for, nobody staffs for, and nobody wants to present at the all-hands because it doesn’t have a good demo.

    You know what does have a good demo? A generative AI tool running on clean data. The problem is, nobody wants to do the 12 months of invisible work that makes that demo possible.

    What the Foundation Actually Looks Like

    If this sounds familiar, it should. I wrote a few weeks ago about AI training programs failing because companies skip the assessment work. This is the same pattern, just applied to data instead of people.

    The fix isn’t complicated to understand. It’s just hard to do.

    Audit before you buy. Before you sign another AI contract, figure out what data you actually have, where it lives, who owns it, and whether it’s usable. This is the equivalent of getting a building inspection before you renovate. Almost nobody does it because they don’t want to hear the answer.

    Govern before you glamour. You need data governance that people will actually follow — not a 60-page policy document that lives in SharePoint and nobody reads. Governance that works is governance that’s built into the workflow, not bolted on after the fact. And it has to address the shadow AI problem directly, because your people are already using tools you don’t know about.

    Accept the timeline nobody wants to hear. Most organizations need 6 to 12 months of infrastructure work before their AI investments will actually pay off. Nobody wants to hear that. Every vendor in the market is telling you that you can deploy AI in weeks. And you can — you can deploy AI in weeks. You just can’t deploy AI that works on data that doesn’t exist.

    Stop treating data readiness as IT’s problem. This is a business problem. It requires business decisions about what data matters, how it should be structured, and who’s responsible for maintaining it. When data readiness gets delegated entirely to IT, it becomes a technical project. It needs to be a strategic priority with executive sponsorship and business ownership.

    The Museum Metaphor

    You can put a touchscreen in a museum. You can add digital signage and interactive displays and a really nice app. And when visitors use the app, it’ll work. But the building is still a museum. The infrastructure is still old. The plumbing still leaks. The electrical can’t support the load.

    That’s what most companies have done with AI. They’ve put a touchscreen on a museum and called it a digital transformation.

    The organizations that will actually win with AI — not in 2026, but in 2028 and beyond — aren’t the ones buying the most tools right now. They’re the ones doing the foundation work that nobody wants to talk about at conferences because it’s boring and it takes a long time and you can’t put it in a press release.

    They’re the ones whose leadership looked at the shiny AI demos and said, “Great. Now show me the data architecture.” And when the answer was unsatisfying, they had the discipline to fix the foundation before building on top of it.

    That’s not a technology decision. That’s a leadership decision. And right now, most leaders are choosing the demo over the foundation. They’ll figure out the cost of that choice in about two years — when 60% of their AI projects are abandoned, and they’re wondering what went wrong.

    The data was never ready. Nobody wanted to look.

  • AI Ate the Proving Ground: The Leadership Crisis Nobody Sees Coming

    Everyone’s talking about AI killing the billable hour. Outcome-based pricing, gain share models, managed services — it’s all over LinkedIn, and it’s all valid. But it’s also the wrong conversation.

    The billable hour wasn’t just a revenue model. It was the training system. And that’s the part nobody’s mourning.

    Here’s What I Mean

    For decades, the way you became a senior consultant — or a senior anything in knowledge work — was by doing the work. The tedious, repetitive, detail-heavy work. You pulled the data. You built the decks. You sat through 47 stakeholder interviews and watched how the senior partner navigated the room when things got tense.

    You didn’t learn judgment from a training program. You learned it from reps. Thousands of hours of reps.

    AI just automated the reps.

    McKinsey’s internal tool, Lilli, can now do roughly 80% of what a junior analyst used to do — scanning documents, drafting summaries, building slides. BCG’s tool polishes presentations automatically. Deloitte is using GPT-based tools to write proposals. And the firms are responding exactly the way you’d expect: PwC cut graduate hiring by 30%. The Big Four have frozen entry-level salaries for three consecutive years. AI roles now outnumber entry-level consultant positions.

    The consulting pyramid — wide base of juniors, narrow top of partners — is becoming what researchers are calling “the obelisk.” Tall, narrow, fewer people at the bottom.

    And if you’re looking at that and thinking “good, efficiency,” I’d ask you to zoom out.

    The Three-Sided Squeeze

    Because the junior pipeline drying up isn’t happening in isolation. It’s happening at the same time as two other things:

    Clients are poaching the seniors. The in-housing trend is well-documented at this point. Companies bring in consultants for the initial build, learn the playbook, hire their own people, and cut the cord. The consultants who do have experience and judgment are getting absorbed by the client side. Consulting firms are being forced into “capacity building” roles — basically training their replacements — just to keep the engagement going.

    The market doesn’t want generalists anymore. Budget authority for AI is shifting from the CIO to line-of-business leaders — your CMOs, your COOs — and they want vertical expertise tied to measurable outcomes. The well-rounded generalist consultant, which is exactly the type of person the old training model was designed to produce, is exactly what the market is no longer buying.

    So you’ve got AI replacing the bottom, clients poaching the top, and a market that doesn’t want what the middle produces. That’s not a pricing problem. That’s a “who’s going to be left to run these firms in 2030?” problem.

    “We’ll Figure It Out” Is Not a Strategy

    Here’s what bothers me most about this. The response from most firms, based on every piece of research I’ve read, is essentially: hand the juniors some AI tools and hope they pick up higher-order skills on their own.

    That’s not a plan. That’s wishful thinking dressed up in a technology budget.

    The proving ground — the place where future leaders actually developed the instincts and judgment to lead — was the work itself. The grind was the training. When you remove the grind without replacing it with something intentional, you don’t get more efficient leaders. You get people who can operate AI tools but have never had to develop the critical thinking that makes those tools useful.

    Think about it this way: if you’ve never had to manually pull apart a dataset to understand why the numbers don’t add up, how do you know when the AI’s output is wrong? If you’ve never sat in a room where a client pushed back hard on your recommendation, how do you develop the instinct to read the room? These aren’t skills you learn from a tutorial. They come from doing the work, making mistakes, and having someone more experienced show you where you went sideways.

    This Isn’t Just a Consulting Problem

    And here’s why this matters even if you’ve never hired a consultant in your life.

    Every knowledge-work industry has its own version of this proving ground. Law firms had junior associates who learned by reviewing thousands of documents. Marketing teams had analysts who built intuition by manually pulling reports week after week. Finance had junior traders who learned market feel through repetitive modeling.

    AI automated the repetitive work in all of these fields. And the efficiency gains are real — I’m not arguing they aren’t. But the question nobody’s asking is: where does the next generation of CMOs, managing directors, and senior partners come from if the work that trained them doesn’t exist anymore?

    This is the 10/20/70 problem. Research consistently shows that successful transformation is 10% algorithms, 20% technology, and 70% business process and people. We’re spending all our energy on the 10% and 20%, and almost none on the 70% that actually determines whether any of this works.

    So What Do We Do About It?

    I’m not going to pretend I have the whole answer, but I do know what the answer isn’t: generic AI literacy programs. Slapping a “prompt engineering for beginners” course on your LMS and calling it workforce development is not going to close this gap. I’ve seen too many of those programs, and they produce people who can write a decent prompt but still can’t think critically about the output.

    What has to change is that the training model needs to become intentional instead of incidental. The old system worked because the learning was baked into the work — you didn’t have to design it, it just happened. That luxury is gone. Now somebody has to actually design the experiences that develop judgment, critical thinking, and strategic instincts. And that “somebody” is leadership.

    A few things I think need to happen:

    Redefine what “entry-level” means. The new junior role isn’t “do the research.” It’s “evaluate whether the AI’s research is trustworthy and figure out what’s missing.” That’s a fundamentally different skill set, and most organizations aren’t hiring for it or developing it.

    Build structured learning into the work, on purpose. If the reps aren’t happening organically anymore, you have to create them. That means rotational programs, case-based learning, and — critically — pairing junior people with experienced practitioners who can show them why the AI’s answer is only 60% of the story.

    Stop treating the people problem as a technology problem. New tools don’t fix a broken development pipeline. If your answer to “how do we train the next generation of leaders?” starts with a software purchase, you’re solving the wrong problem.

    The Uncomfortable Truth

    The consulting industry — and every knowledge-work industry behind it — is about to find out what happens when you optimize for efficiency without thinking about development. The gains are real today. The leadership vacuum shows up in five years.

    And by the time you notice it, the people who could have filled those roles will have gone somewhere else, built their own thing, or never developed the skills in the first place.

    Nobody held a funeral for the billable hour. Maybe they should have. Not because the billing model mattered, but because nobody stopped to think about what else we were burying along with it.

  • Your AI Training Program Is Failing Because You Skipped the Hard Part

    Raise your hand if your company bought an AI training program in the last 18 months. Now keep your hand up if anyone actually changed how they work because of it.

    Yeah. That’s what I thought.

    Here’s what happened: somebody in leadership — maybe the CEO, maybe the CTO, maybe an enthusiastic VP who went to a conference — came back and said “we need AI training.” So the company bought a platform, rolled out a course, tracked who completed it, and moved on. Box checked. Budget spent. Progress declared.

    Except nobody’s behavior changed. And now you’re wondering why.

    The Pattern I Keep Seeing

    I’ve had this conversation with enough companies at this point that I can almost predict the arc. It starts with the announcement — “we’re investing in AI readiness.” There’s usually an all-hands or at least an email. Then comes the LMS module: intro to generative AI, what is a prompt, here are some use cases. Maybe a “prompt engineering for beginners” course if they’re feeling ambitious.

    Everyone completes it. Some people actually enjoy it. And then… nothing. The people who were already experimenting with AI are annoyed because they sat through content they outgrew six months ago. The people who are genuinely struggling are even more lost because the course moved too fast. And the people in the middle got a certificate they’ll never think about again.

    Here’s the number that should bother you: less than 3% of the workforce are what researchers would call actual AI practitioners — people who use AI in their workflows in ways that produce measurable productivity gains. The rest are spread across a spectrum from “haven’t touched it” to “I use ChatGPT sometimes.” And most training programs treat that entire spectrum as one audience.

    That’s not training. That’s a content delivery exercise.

    I’m Saying This as Someone Who Did It Too

    I want to be transparent here: we did this at Trust Insights. We built the 101 content. We created the introductory courses. And at the time, it was the right thing to do — people needed a starting point, and that foundational content served a real purpose.

    But we’ve moved past it because the market has moved past it. The question is no longer “what is generative AI and should I care?” Most people have answered that for themselves, one way or another. The question now is: “I’m at point A with this technology, and I need to get to point B — and those two points are different for every single person in my organization.”

    That’s a fundamentally different problem, and a generic course doesn’t solve it.

    This Is a Leadership Problem Disguised as a Training Problem

    Here’s where it gets uncomfortable. The reason most companies default to one-size-fits-all training is because the alternative requires actual leadership work that nobody wants to do.

    To train people effectively, you first have to understand where they are. Not as a company. As individuals. You need to know who’s still at basic prompt-and-response chat, who’s already building custom AI agents, who’s operating agentic systems, and who hasn’t opened the tool at all. Those are different people with different needs, and the gap between them is enormous.

    We developed a framework for this — six levels of generative AI proficiency, ranging from basic chat usage all the way through engineering custom agentic systems. And one of the most important things about it is the note at the bottom: higher isn’t inherently better. The goal isn’t to get everyone to level six. The goal is to match the proficiency level to the role.

    An individual contributor who uses Deep Research and NotebookLM effectively is exactly where they need to be. A director who’s still at level one has a different problem — and it’s not a training problem. It’s a judgment and delegation gap that no course is going to fix.

    But here’s the thing: you can’t make any of those distinctions if you haven’t done the assessment work first. And assessment is harder than buying a course. It requires understanding both the technology and your people. It requires conversations, not just completion rates.

    This is the 10/20/70 reality. Successful transformation is roughly 10% algorithms, 20% technology, and 70% business process and people. Most AI training budgets are 100% allocated to the 10% and 20%, and exactly 0% allocated to the 70% that determines whether any of it sticks.

    What Actually Works (And What We’re Doing Now)

    The shift we’ve made — and what I’d recommend to any organization — is from curriculum delivery to proficiency-mapped development. That means a few things in practice:

    Assess before you train. Figure out where your people actually are before you decide what they need. This sounds obvious and almost nobody does it. They skip straight to the content because the content is the easy part.

    Stop running everyone through the same thing. The person who needs to understand what AI is and the person who needs to learn how to build multi-step automations are not in the same class. Stop pretending they are.

    Personalize through practice, not just content. This is why we shifted to workshops. You can’t develop AI judgment by watching someone else use AI. You have to use it yourself, on your own problems, with someone experienced enough to tell you where your thinking went wrong. That’s the model that actually changes behavior.

    Match the investment to the role. Not everyone needs the same depth. What matters is that each person is proficient enough for what their job actually requires — and that leadership can articulate what “proficient enough” looks like for each role.

    Stop Measuring Completion. Start Measuring Capability.

    This is the part that most training programs get fundamentally wrong, and it drives me crazy. The metric everyone tracks is completion. Did they finish the course? Did they pass the quiz? Did they get the certificate?

    None of that tells you whether they can do anything differently on Monday morning.

    The real question is capability: can this person apply AI in a way that makes their work better, faster, or more accurate? Can they evaluate an AI output and know when it’s wrong? Can they identify where AI fits in their workflow and where it doesn’t?

    If you can’t answer those questions for your team, you don’t have a training problem. You have a leadership problem. Because it means nobody has defined what proficiency looks like for each role, and until somebody does, no training program — no matter how good the content is — is going to move the needle.

    The Hard Part Is the Whole Point

    Look, generic training programs aren’t evil. Some of them are genuinely good. The introductory content we built was good — it helped people get started, and I don’t regret building it.

    But “getting started” was 2023’s problem. We’re past that now. The organizations that are actually seeing results from AI aren’t the ones with the best training platform. They’re the ones where leadership did the unglamorous work of understanding where each person is, defining where they need to be, and building intentional pathways between those two points.

    That’s not a vendor problem. That’s not a budget problem. That’s a leadership problem. And the only people who can solve it are the leaders who are willing to do the hard part instead of buying another course and calling it done.