I was at lunch the other day and the bartender started venting about the parking system in the building. Apparently it’s run by an AI startup founded by a buddy of the building owner, and here’s what it does: it scans your plate when you pull in, it logs your entry time, it tracks your space, it monitors duration, it calculates what you owe, it processes the data, it generates a record, and it… never sends you a bill.
Eight steps. Eight whole steps of sophisticated data collection. And the one thing the system was actually built to do? It doesn’t do it.
I haven’t stopped thinking about that since.
The Most Expensive Way to Look Busy
Here’s why that story matters beyond bad parking software: that parking system is every AI pilot I’ve seen in the last two years.
Companies are deploying AI systems that do an impressive amount of work. They collect data. They process inputs. They run models. They generate outputs. Executives review dashboards. Teams attend status meetings. Everyone agrees the pilot is “progressing.”
But nobody asks the uncomfortable question: is it actually doing the thing we needed it to do?
Gartner predicts that 60% of AI projects will be abandoned by 2026. Not because the technology failed, but because the projects were never designed around an outcome. They were designed around activity.
And there’s a massive difference.
Activity Is Not Outcome
This is the part that gets me. I talk to executives all the time who can describe their AI pilot in incredible detail. They’ll walk me through the architecture, the data sources, the model they chose, the vendor they’re working with. They’ll tell me about the meetings, the steering committee, the quarterly reviews.
Then I ask: what business problem does this solve?
And I get a pause.
Not because they don’t have an answer. They do — they’ll say something about efficiency or insights or transformation. But when I push on what specific, measurable outcome they expected by now? That’s when it gets quiet.
The parking system doesn’t have a technology problem. It has a “nobody defined what done looks like” problem. And so do most AI pilots.
Why This Keeps Happening
I keep coming back to the 10/20/70 framework we use at Trust Insights. Ten percent of your success with AI comes from the algorithms. Twenty percent comes from the technology and infrastructure. Seventy percent comes from business process and people.
That parking system nailed the 10% — the algorithms clearly work, they’re scanning plates and calculating fees just fine. They probably have the 20% covered — the infrastructure is collecting and processing data. But the 70%? The actual business process that says “and then you send the bill”? Nobody built that part.
It’s the same pattern everywhere. Companies invest heavily in the AI and the technology, and then completely skip the business process work that actually delivers the outcome. Because the process work isn’t exciting. Nobody gets a standing ovation at a board meeting for saying “we mapped out how the output connects to a business action.” They get a standing ovation for saying “we deployed an AI model.”
The “Buddy” Problem
Here’s the other part of that parking story that sticks with me. The AI startup got the contract because the founder is a buddy of the building owner. Not because they had a track record. Not because they demonstrated they could solve the parking billing problem. Because they knew a guy.
This is happening in enterprise AI every single day. Vendor selection based on relationships, golf games, and conference cocktail parties. Not on whether the vendor can actually deliver the specific outcome you need.
I’m not saying relationships don’t matter — they do, and they should. But when the relationship IS the evaluation criterion, you end up with an eight-step system that doesn’t send bills. You end up with an AI pilot that looks impressive in a slide deck and delivers nothing to the business.
What “Done” Actually Looks Like
If you’re running an AI pilot right now, I want you to answer one question: what does this produce that someone in the business can act on?
Not “what does it process.” Not “what data does it analyze.” What does it produce? What’s the output that triggers a business action?
For the parking system, it should be simple: it produces a bill, the customer pays, the building makes money. That’s the outcome. Everything else is infrastructure supporting that outcome.
For your AI pilot, what’s the equivalent? If you can’t articulate it in one sentence, your pilot has the same problem as that parking garage. It’s collecting data and going nowhere.
Stop Celebrating the Eight Steps
The most frustrating part of all of this is that everyone involved in these pilots genuinely believes progress is being made. Because they’re measuring the wrong things. They’re measuring activity, not outcome.
Meetings held. Data processed. Models trained. Dashboards built. Check, check, check, check.
Bills sent? Revenue generated? Time saved on a specific process? Decision quality improved on a measurable metric?
Crickets.
This is a leadership problem, not a technology problem. The technology in that parking system works beautifully. It just doesn’t do the job. And leadership is so dazzled by the fact that AI is doing eight things that nobody noticed it isn’t doing the one thing that matters.
If your AI pilot can’t answer “what business outcome did this produce this quarter” with a specific number, it’s not a pilot. It’s a very expensive hobby.
And I say that as someone who genuinely wants these implementations to succeed. But they won’t succeed by accident. They’ll succeed because someone did the unsexy work of defining what done looks like before anyone wrote a line of code. That’s the job. And no amount of sophisticated data collection changes that.
Katie Robbert is CEO of Trust Insights. When she’s not helping companies figure out whether their AI actually does anything, she’s probably listening to bartenders explain technology better than most consultants do.