Author: Katie Robbert

  • Which Change Management Framework Should You Use?


    If you’re about to lead a change initiative — an AI adoption project, a process overhaul, a digital transformation — you’ve probably been told to “pick a framework.” And then you find yourself staring at five or six options, each with its own acronym, its own diagram, and its own book deal, wondering which one is the right fit.

    I’ve spent the last several weeks writing a detailed comparison of the five most popular change management frameworks and how they stack up against the 5P Framework by Trust Insights. I’m biased — I built the 5P Framework — but I’ve tried to be honest about what each model does well and where it falls short. Here’s the decision guide version.

    When to Use ADKAR

    Use ADKAR when your primary challenge is individual adoption. If the change depends on each person understanding, wanting, and being able to do something new, ADKAR gives you a clear psychological sequence to work through. It’s especially strong for technology rollouts where you need people to actually use the new tool — not just know it exists. I wrote more about what ADKAR gets right (and misses) here.

    When to Use Kotter’s 8 Steps

    Use Kotter when you need organizational momentum. If the change requires executive buy-in, cross-functional alignment, and visible early wins to build credibility, Kotter gives you a playbook for moving an entire organization. It’s best suited for large-scale transformations where the biggest risk is inertia. Here’s where it stops short.

    When to Use Lewin’s Model

    Use Lewin when you need a simple mental model for understanding resistance. Unfreeze-Change-Refreeze is useful as a conceptual lens, especially for cultural or behavioral shifts. But — and this is a big but — you’ll need to pair it with something more operational for actual project execution. Three phases isn’t enough on its own.

    When to Use McKinsey 7-S

    Use McKinsey 7-S when you need a diagnostic tool. If you suspect organizational misalignment is the root cause of your problems, 7-S gives you a structured way to assess seven interconnected dimensions. It’s a snapshot, not a roadmap — so you’ll need an action framework to follow up. Here’s why diagnosis alone isn’t enough.

    When to Use Bridges’ Transition Model

    Use Bridges when you need to focus on the emotional experience of change. If your team is going through grief, uncertainty, or fear about the future, Bridges gives you language and structure for supporting them through the psychological transition. It’s the most human model on this list. But empathy needs structure to produce results.

    When to Use the 5P Framework

    Use the 5P Framework when you need all of the above to actually lead somewhere.

    The 5P Framework isn’t competing with these models — it’s the structural wrapper that makes them measurable. Start with Purpose: the measurable question your change is trying to answer. Then address People (use ADKAR or Bridges here if you need individual-level or emotional support). Define your Process (use Kotter here if you need organizational momentum). Select the right Platform. And close with Performance — measuring whether you actually achieved what you set out to do.

    The best approach isn’t to pick one framework and hope it covers everything. It’s to start and end with measurement, and use the right tools for the middle.

    The Moral of the Story

    Stop asking “which framework should I use?” and start asking “what measurable question am I trying to answer?” Once you have that, pick the tools that serve each layer of the work — and bookend the whole thing with Purpose and Performance so you know whether it actually worked.

    For the full side-by-side comparison with the detailed strengths-and-gaps analysis: The 5P Framework vs. Other Change Management Models.

    Ready to put the 5P Framework to work? Start with Change Management, AI Strategy, or Getting Started — whichever matches where you are right now.

  • The Two Questions Every Change Project Should Start and End With


    I’ve watched dozens of change management projects follow a framework to the letter and still end up in the same place: no one can tell you whether it actually worked.

    Not because people didn’t try. Not because the framework was bad. Because no one asked the two questions that matter most.

    Question One: “What Are We Trying to Answer?”

    Before you build a coalition, before you create urgency, before you unfreeze anything — you need a measurable question. Not a goal statement. Not a vision. Not something that sounds good in a town hall. A question that has a specific, measurable answer.

    “How do we increase customer retention by 15% over the next two quarters?” is a measurable question. “Improve the customer experience” is not. The first one tells you exactly what to measure and when to measure it. The second one could mean literally anything, and six months from now everyone will claim success based on whatever metric makes them look good.

    The difference between these two starting points determines whether you’ll be able to evaluate the project at all.

    And here’s what gets me: most change management frameworks skip this step entirely. ADKAR starts with Awareness. Kotter starts with Urgency. Lewin starts with Unfreeze. Bridges starts with Ending. All valid starting points for their respective purposes. But none of them require you to define what success looks like in measurable terms before you begin. They assume someone has already done that. And in my experience? Someone usually hasn’t.

    Question Two: “Did We Answer It?”

    This is the question that almost never gets asked. Projects end. The new process gets adopted. The tool gets rolled out. Everyone moves on to the next fire. And nobody circles back to check whether the original question got answered.

    Kotter’s final step is “Institute Change” — embedding the change into culture. ADKAR ends with Reinforcement — making the change stick. Both important. Neither one is the same as measuring whether the change produced the outcome you set out to achieve.

    You can reinforce a change that didn’t work. You can institutionalize a process that doesn’t solve the problem. “It stuck” and “it worked” are two very different things.

    I’ve lived this. I’ve been in the room at the end of a major initiative where everyone is congratulating themselves and I’m the one asking “but did we hit the number?” And the room goes quiet. Not because they don’t care — because they genuinely don’t have an answer. The project didn’t fail. It just never had a way to prove it succeeded.

    The Framework That Forces Both Questions

    The 5P Framework by Trust Insights was built to close this gap. It starts with Purpose — your measurable question — and ends with Performance — the answer to that question. Everything in between (People, Process, Platform) serves the Purpose and gets validated by Performance.

    Two questions. That’s the difference between a project that “went well” and a project that actually worked.

    The Moral of the Story

    Every framework has value. But no framework works if you don’t know what you’re trying to achieve and you don’t check whether you achieved it. Start with a measurable question. End with a measurable answer. Everything else is the work in between.

    For the full side-by-side comparison of how five major models handle (or don’t handle) these two questions: The 5P Framework vs. Other Change Management Models.

    Ready to put the 5P Framework to work? Start with Change Management or go to Getting Started for the full walkthrough.

  • Kotter’s 8 Steps Are Missing a 9th

    It’s the most recognized change management playbook in the corporate world. But recognition doesn’t mean it’s finished.


    Kotter’s 8-Step Process for Leading Change is probably the most recognizable change management framework in the corporate world. Urgency, coalition, vision, communication, empowerment, short-term wins, consolidation, institutionalization. It’s comprehensive, it’s sequential, and it’s been taught in business schools for decades. If you’ve ever sat through a leadership offsite about “driving transformation,” someone had Kotter on a slide.

    And I have a lot of respect for this model. It gets something right that many others don’t: organizational momentum. The emphasis on building a guiding coalition and generating short-term wins is practical and powerful. If you’ve ever tried to push through a transformation without executive buy-in or visible early results, you know exactly why those steps matter. People need to see proof that the change is real and that leadership is behind it. Kotter understood that.

    He also understood that change is a leadership problem, not just a management problem. The distinction between “create urgency” and “write a project plan” is enormous. Kotter is telling you to move people emotionally before you move them operationally. That’s smart.

    But There’s a Structural Problem

    Here’s the thing. Kotter’s model starts with urgency. Not with a clearly defined, measurable goal. Urgency is about emotional energy — getting people to feel that change is necessary. That’s useful, but it’s not the same as stating what specific outcome you’re trying to achieve and how you’ll know if you got there.

    I’ve seen this play out more times than I can count. Leadership team gets fired up. They rally the troops. They build a coalition. They communicate a vision. And the vision is something like “become an AI-first organization” or “transform the customer experience.” Which sounds great in a town hall. But when you ask “how will we know if this worked?” the room goes quiet.

    That’s not a Kotter problem, exactly — it’s a problem with how organizations use Kotter. But a framework should make it hard to skip the critical step, not easy.

    And then there’s the ending. The final step — “Institute Change” — is about embedding new behaviors into the culture. That’s important. But it’s not the same as measuring whether the change produced the result you originally set out to achieve. You can successfully embed a new process into your culture and still have no idea whether it actually moved the needle on the business problem you were trying to solve.

    You institutionalized a change. Great. Did it work? (Crickets.)

    The Missing 9th Step

    What Kotter’s model needs is a bookend on each side. A measurable purpose at the start and a performance review at the end. That’s exactly what the 5P Framework by Trust Insights provides.

    Purpose asks: “What specific, measurable question are we trying to answer?” Not a vision statement. Not an aspiration. A question with a number attached to it. Performance asks: “Did we answer it?” Not “did people adopt it?” Not “did it stick?” — “Did it actually produce the outcome we said it would?”

    So here’s what I actually recommend: you can run Kotter’s 8 steps inside the 5P Framework. Use Kotter for organizational alignment, coalition building, and momentum — it’s excellent for that. But start with Purpose so you know what success looks like before you create urgency around it. And end with Performance so you can actually prove the initiative was worth the disruption.

    Eight steps is a lot to execute. A lot of meetings. A lot of energy. A lot of political capital spent. You deserve to know at the end whether they worked.

    The Moral of the Story

    Kotter built a great playbook for moving an organization through change. But he assumed someone had already defined what success looked like, and he didn’t build in a step for checking whether you got there. The 5P Framework adds those bookends: start with a measurable question, end with a measurable answer.

    If you want the full side-by-side breakdown of how Kotter stacks up against ADKAR, Lewin, McKinsey 7-S, and Bridges, I wrote the whole comparison: The 5P Framework vs. Other Change Management Models.

    Ready to see how the 5P Framework applies to change management specifically? Start here.

  • Lewin’s Model Is 80 Years Old. Here’s What It’s Missing.


    Kurt Lewin’s Unfreeze-Change-Refreeze model is the oldest formal change management framework still in active use. Published in the 1940s, it introduced a deceptively simple idea: before you can change a system, you have to destabilize it. Then you make the change. Then you stabilize the new state.

    It’s elegant. It’s intuitive. And for certain types of change — especially cultural or behavioral shifts — the metaphor still holds up. The concept of “unfreezing” is a genuinely useful lens for understanding organizational resistance. People aren’t resisting change because they’re difficult. They’re frozen in patterns that have worked for them until now. You have to acknowledge that before you can move them.

    I use this metaphor all the time when I’m explaining to leadership teams why their shiny new initiative isn’t getting traction. “You haven’t unfrozen anything. You just dropped a new tool on people and expected them to reorganize their entire workflow overnight.” That’s a Lewin insight, and it’s a good one.

    But Simplicity Has a Cost

    Here’s the thing. Lewin’s model is a three-phase conceptual framework, not an operational one. It tells you the shape of change but gives you almost nothing to work with in terms of execution.

    There’s no guidance on how to assess your people’s skills, willingness, or psychological safety. There’s no process for documenting repeatable workflows. There’s no mechanism for selecting the right tools or platforms. And there’s no way to measure whether the change actually achieved what it was supposed to.

    “Refreeze” means the new state has stabilized. It doesn’t mean the new state is better. You can successfully refreeze around a process that doesn’t solve the problem you started with. You can stabilize into something that’s just as broken as what you had before — it’s just a different kind of broken. And Lewin’s model gives you no way to tell the difference.

    I also have a practical objection: in 2026, does anything ever really “refreeze”? The pace of change in most organizations means you’re unfreezing again before the last change fully solidified. Lewin was writing in the 1940s. The idea that you could reach a stable new state and stay there for a meaningful period of time was a lot more reasonable then than it is now.

    What 80 Years of Practice Has Taught Us

    Lewin was ahead of his time. But the world has gotten more complex, and change management needs more structure than three phases can provide. The 5P Framework by Trust Insights builds on the same intuition — that change needs to be deliberate and phased — but adds the specificity that Lewin’s model lacks.

    Purpose defines what you’re changing and why, in measurable terms. People addresses the skills, willingness, and psychological safety of your team (this is where the “unfreeze” work actually lives). Process maps the workflows that need to change. Platform ensures you’re choosing the right tools. And Performance closes the loop by measuring whether the change actually worked — not just whether it stabilized.

    So here’s my recommendation: use Lewin’s Unfreeze-Change-Refreeze as a mental model for the emotional arc of transformation. It’s a good lens. But pair it with the 5P Framework so you have operational structure, clear measurement, and a way to prove the investment was worth it.

    Because “we successfully refroze” is not the same thing as “it worked.” And after 80 years, we should probably expect a framework to tell us the difference.

    The Moral of the Story

    Lewin gave us a foundational metaphor. It’s still useful. But a metaphor isn’t an operating system. If you need to actually manage a change initiative — with people, processes, tools, and measurable outcomes — you need more than three phases and an ice cube analogy.

    For the full comparison of how Lewin stacks up against ADKAR, Kotter, McKinsey 7-S, and Bridges: The 5P Framework vs. Other Change Management Models.

    Want to see what a complete change management framework looks like? Start with the 5P Framework for Change Management.

  • McKinsey 7-S Tells You What to Look At — Not What to Do


    The McKinsey 7-S Framework is one of those models that looks impressively comprehensive on a slide deck. Strategy, Structure, Systems, Shared Values, Skills, Style, Staff. Seven interconnected elements that all need to be aligned for an organization to function effectively.

    And it’s right. Organizational alignment does matter. If your strategy says one thing and your systems do another, you’ve got a problem. The 7-S model is genuinely useful as a diagnostic tool — a way to assess where misalignment exists within an organization. I’ve used it in exactly that context: “let’s map these seven dimensions and see where the cracks are.” It works for that.

    It’s also the framework that most explicitly addresses organizational culture (Shared Values, Style) as a factor in change. A lot of models ignore culture entirely. McKinsey at least puts it on the diagram.

    The Limitation Nobody Mentions

    Here’s the thing. McKinsey 7-S is a snapshot, not a roadmap. It tells you what to examine, but it doesn’t tell you what to do once you’ve found the misalignment. There’s no sequence. There’s no prioritization. There’s no process for deciding which element to address first or how to manage the change from state A to state B.

    I’ve sat in rooms where teams have done a thorough 7-S analysis. They’ve mapped all seven dimensions. They’ve identified the misalignments. And then someone says, “Okay, so… now what?” And the room goes quiet. Because the framework gave them a diagnosis with no treatment plan.

    It also doesn’t start with a measurable purpose. You’re assessing alignment against… what, exactly? If you haven’t defined what success looks like before you start analyzing seven organizational dimensions, you’re generating a lot of insight with no clear direction for what to do with it. You end up with a really thorough slide deck and no next step.

    And like every other major framework I’ve reviewed, it doesn’t close the loop. There’s no built-in mechanism for measuring whether your realignment efforts actually produced the outcome you were after. You can realign all seven S’s and still not know if the business problem got solved.

    From Diagnosis to Action

    The 5P Framework by Trust Insights isn’t competing with McKinsey 7-S — it’s completing it. Use 7-S to diagnose organizational alignment. Then use the 5P Framework to act on what you find.

    Start with Purpose: define the measurable question your realignment is trying to answer. Then map the 7-S insights to the 5P structure. Skills, Staff, and Style map to People. Systems and Structure map to Process. The tools and technology that support your Strategy become Platform. And Performance measures whether the realignment actually produced the outcome you defined in Purpose.

    Now you’ve got something. A diagnosis and a treatment plan. A before and an after. A question and an answer.

    The Moral of the Story

    Diagnosis is valuable. I’m not dismissing it. But without a framework for action and measurement, it stays on a slide deck instead of driving real change. McKinsey 7-S tells you where you’re misaligned. The 5P Framework tells you what to do about it and how to know if it worked.

    For the full side-by-side comparison: The 5P Framework vs. Other Change Management Models.

    Want to move from diagnosis to action? The 5P Framework for Change Management is where to start.

  • Bridges’ Transition Model Focuses on Feelings — Not Results


    William Bridges’ Transition Model is different from most change management frameworks, and I mean that as a genuine compliment. While everyone else is focused on the operational mechanics of change — steps, phases, alignment diagrams — Bridges focuses on the psychological experience. Ending, Neutral Zone, New Beginning. It’s about what people go through internally when the world around them shifts.

    If you know me, you know I believe people come first. Always. So a model that centers the human emotional experience of change? I’m here for it.

    The Neutral Zone, in particular, is an underappreciated concept. That messy, uncertain period where the old way is gone but the new way hasn’t fully taken hold. Most project plans pretend this phase doesn’t exist. They go from “launch new system” to “adoption complete” with nothing in between. Bridges names the thing that every person who’s ever lived through a reorg already knows: there’s a disorienting middle where nobody knows what the rules are anymore. Acknowledging that — planning for it — is valuable.

    Where It Falls Short

    Here’s the thing. Bridges’ model is entirely focused on the emotional journey. It doesn’t address skills. It doesn’t address tools. It doesn’t address processes or workflows. It doesn’t start with a measurable purpose, and it doesn’t end with any way to evaluate whether the transition led to a better outcome.

    “New Beginning” means people have psychologically accepted the change. It doesn’t mean the change worked. Your team could move through Ending, Neutral Zone, and New Beginning with flying colors and still be operating a process that doesn’t solve the original business problem. Everyone feels great about the transition. The numbers haven’t moved. Now what?

    Bridges also doesn’t give you any operational structure. If you’re managing a real project — with timelines, deliverables, technology decisions, and stakeholder reporting — you need more than an emotional roadmap. You need to know who’s responsible for what, what the workflow looks like, which tools you’re using, and how you’ll measure success. Bridges doesn’t touch any of that.

    I’ve seen well-meaning leaders invest heavily in supporting their teams through the emotional transition and completely neglect to document the new process or measure whether the change delivered results. The team felt cared for. The project had no measurable outcome. That’s not a win — it’s a missed opportunity.

    Combining Emotional Intelligence with Operational Structure

    The 5P Framework by Trust Insights and Bridges’ Transition Model are natural complements. Bridges gives you a lens for the People layer of the 5P Framework — understanding where your team is emotionally and what support they need to move forward. That’s the “unfreeze” work, the “Neutral Zone” management, the emotional intelligence piece. It’s real and it matters.

    But the 5P Framework wraps that emotional intelligence in operational structure. Purpose defines what you’re trying to achieve in measurable terms — before you start managing anyone’s feelings about the change. Process maps the workflows. Platform selects the right tools. And Performance measures whether the whole effort — emotional transition included — actually delivered results.

    Use Bridges for the People layer. Use 5P for everything else. Now you’re managing the whole picture: how people feel and whether the project worked.

    The Moral of the Story

    Feelings matter. But so do outcomes. The best change management approach addresses both. Bridges gives you the empathy. The 5P Framework gives you the structure, the measurement, and the proof.

    For the full side-by-side comparison of all five models: The 5P Framework vs. Other Change Management Models.

    Ready to see what operational structure plus emotional intelligence looks like? The 5P Framework for Change Management.

  • What ADKAR Gets Right — and the One Thing It Misses

    It’s one of the most popular change management models for a reason. But popularity doesn’t mean it’s complete.


    If you’ve ever been part of a change management initiative — and if you’re reading this, you probably have — there’s a good chance someone pulled out the ADKAR model. Awareness, Desire, Knowledge, Ability, Reinforcement. It’s been the go-to framework in corporate change management for years, and I get why.

    I’ve used ADKAR. I’ve recommended ADKAR. And I think it does something genuinely important that a lot of other models skip over entirely: it addresses willingness. The “Desire” step forces you to acknowledge that people don’t just need to understand the change — they need to actually want to participate in it. That’s not a small thing. Most models assume that if you explain the change clearly enough, people will just get on board. Anyone who has ever managed a team knows that’s not how humans work.

    ADKAR also gives you a diagnostic. If someone isn’t adopting the change, you can pinpoint exactly where they’re stuck — is it Awareness? Desire? Knowledge? — and address that specific barrier. That’s practical. That’s useful. I’m not here to trash it.

    So What’s Missing?

    Here’s the thing. ADKAR starts with Awareness. That means it assumes someone has already figured out what the change is and why it matters. But in practice? That step gets skipped more often than you’d think.

    Teams jump into building awareness for a change initiative that was never clearly defined in measurable terms. They can tell you they’re “adopting AI” or “improving customer experience,” but they can’t tell you what specific, measurable question the project is trying to answer. “We’re rolling out a new CRM” is not a purpose. “What is the impact of our CRM migration on sales cycle length?” — that’s a purpose.

    And it doesn’t just have a gap at the beginning. It has one at the end, too.

    Reinforcement — the final step — is about making the change stick. It’s about sustaining adoption. That’s valuable. But it’s not the same thing as measuring whether the change actually produced the outcome you set out to achieve. You can reinforce a change that didn’t work. You can sustain adoption of a tool that isn’t solving the problem it was supposed to solve. Reinforcement asks “did people stick with it?” It doesn’t ask “did it actually work?”

    That’s two missing bookends: no measurable purpose at the start, and no performance measurement at the end.

    How the 5P Framework Fills This Gap

    This is where I’m biased, and I’m not even humble about it. The 5P Framework by Trust Insights — which I built — bookends the entire process with Purpose and Performance. Purpose forces you to state a measurable question before you do anything else. Performance forces you to go back and answer it. Everything in between — People, Process, Platform — serves the Purpose and gets validated by Performance.

    So here’s what I actually recommend: you don’t have to abandon ADKAR to use the 5P Framework. Use them together. ADKAR is excellent for the People layer — it gives you a diagnostic for individual adoption that the 5P Framework doesn’t try to replicate at that level of detail. But wrap it in 5P so that the work you’re doing has a defined starting point and a measurable finish line.

    Start with Purpose: what measurable question are we trying to answer with this change? Then use ADKAR to get your People through the adoption curve. Document your Process. Select your Platform. And close with Performance: did we answer the question we started with?

    Because if you can’t answer “did this actually work?” at the end, the framework didn’t fail. The project just never had a way to succeed.

    The Moral of the Story

    ADKAR is a good model. It earned its place. But it was designed to manage individual adoption, not to structure an entire change initiative from purpose to proof. If you’re using it as your only framework, you’re missing the two questions that matter most: “What are we actually trying to achieve?” and “Did we achieve it?”

    The 5P Framework gives you both. And if you want to see how ADKAR stacks up against Kotter, Lewin, McKinsey 7-S, and Bridges in a side-by-side comparison, I wrote the whole thing out: The 5P Framework vs. Other Change Management Models.

    Want to see where your organization stands? Start with the 5P Framework for Change Management — it’ll walk you through exactly where to begin.

  • The 5P Framework vs. Other Change Management Models


    This article was originally published on the Trust Insights blog. It is republished here for the katierobbert.com audience.


    There is no shortage of change management models. A quick search returns dozens of them, each with its own acronym, diagram, and book deal. The most popular ones — ADKAR, Kotter’s 8 Steps, Lewin’s Unfreeze-Change-Refreeze — have been taught in business schools and deployed in enterprises for decades. They’re well-researched, widely adopted, and genuinely useful.

    So why did we build the 5P Framework?

    Not because those models are wrong. They aren’t. But in our experience working with organizations navigating digital transformation and AI adoption, we kept running into the same pattern: teams would pick a framework, follow it faithfully, and still end up with a project that had no measurable purpose and no way to tell if it worked.

    The problem wasn’t the change management — it was what was missing from it. Most frameworks assume you’ve already defined the “why” before you start. They assume someone has already figured out what success looks like. In practice, that almost never happens. People jump straight to the tool, the process, or the coalition — and the question they were trying to answer gets lost.

    The 5P Framework was built to fix that gap. It’s not a replacement for every other model. It’s a forcing function that ensures two things happen that usually don’t: you define a measurable purpose before you start, and you measure performance against that purpose when you’re done.

    The core idea: The 5P Framework bookends the traditional “People, Process, Technology” approach with Purpose at the beginning and Performance at the end. It’s a simple structural change that eliminates the two most common reasons projects fail — unclear goals and unmeasured outcomes.

    The Five Models We Hear About Most

    Before we compare, let’s give each model a fair overview. These are the frameworks our clients and audiences reference most often. Each one has earned its place for good reasons.

    ADKAR

    Prosci • Jeff Hiatt • Individual change focus

Awareness → Desire → Knowledge → Ability → Reinforcement

    Strengths

    • Individual-level focus makes it practical and personal

    • Prescriptive — tells you exactly what outcomes to target

    • Easy to diagnose where someone is stuck in the change process

    • Strong supporting methodology (Prosci 3-Phase Process)

    Where It Stops Short

    • Doesn’t define the project’s measurable purpose up front

    • Not designed for strategic-level or enterprise-wide planning

    • Assumes the “what” of the change has already been decided

    • No built-in performance measurement against original goals

    Kotter’s 8-Step Model

    Dr. John Kotter • 1996 • Leadership-driven change

Create Urgency → Build Coalition → Form Vision → Enlist Army → Remove Barriers → Short-Term Wins → Sustain Acceleration → Institute Change

    Strengths

    • Clear, sequential roadmap for leaders

    • Strong emphasis on building organizational buy-in

    • Addresses both motivation and sustainability

    • Well-suited for large, visible transformation efforts

    Where It Stops Short

    • Top-down — assumes leadership drives all change

    • Linear process; real change is often iterative and messy

    • High-level roadmap without detailed execution guidance

    • No explicit mechanism for measuring outcomes against initial goals

    Lewin’s Unfreeze-Change-Refreeze

    Kurt Lewin • 1940s • Foundational change theory

Unfreeze → Change (Transition) → Refreeze

    Strengths

    • Elegant simplicity — easy to understand and communicate

    • Force-field analysis (driving vs. restraining) remains powerful

    • Foundational — most modern models build on Lewin’s work

    • Emphasizes that change requires destabilization first

    Where It Stops Short

    • Too high-level for practical implementation

    • Assumes change is linear and can be “refrozen”

    • No guidance on people, tools, processes, or measurement

    • Modern organizations live in continuous change — you rarely refreeze

    McKinsey 7-S Framework

    Waterman & Peters • 1970s • Organizational alignment

Strategy • Structure • Systems • Shared Values • Style • Staff • Skills

    Strengths

    • Holistic — examines the full organizational system

    • Forces you to consider ripple effects of any change

    • Effective for mergers, restructuring, and strategic shifts

    • Recognizes that culture (Shared Values) is central

    Where It Stops Short

    • Diagnostic, not prescriptive — tells you what to look at, not what to do

    • Internally focused; ignores external market forces

    • Complex — seven moving parts with no clear starting point

    • Treats “Staff” as one of seven elements, not the priority

    Bridges’ Transition Model

    William Bridges • 1991 • Emotional/psychological focus

Ending & Letting Go → The Neutral Zone → The New Beginning

    Strengths

    • Deeply empathetic — acknowledges grief, loss, and confusion

    • Critical insight: change ≠ transition

    • Useful for understanding resistance and emotional barriers

    • Pairs well with other, more structural frameworks

    Where It Stops Short

    • Not a full change management framework on its own

    • No guidance on strategy, tools, process, or measurement

    • Doesn’t help you plan or execute — only understand the emotional terrain

    • Leaders often need more than empathy; they need a playbook

    Side-by-Side Comparison

    Here’s where these models actually differ in practice. We evaluated each framework across the dimensions that matter most when you’re leading a real project.

Dimension | ADKAR | Kotter | Lewin | 7-S | Bridges | 5P Framework
Defines measurable purpose | No | Partially | No | No | No | Yes — required
Addresses people | Yes (individual) | Yes (coalition) | Minimal | Partial | Yes (emotional) | Yes — explicit step
Defines repeatable process | Partial | Yes (8 steps) | No | No (diagnostic) | No | Yes — explicit step
Addresses tools/platform | No | No | No | Partial (Systems) | No | Yes — explicit step
Measures performance | No | No | No | No | No | Yes — required
Focus level | Individual | Org (top-down) | Org (theory) | Org (diagnostic) | Individual (emotional) | Project (any scale)
Prescriptive vs. descriptive | Prescriptive | Prescriptive | Descriptive | Diagnostic | Descriptive | Prescriptive
Complexity | 5 stages | 8 steps | 3 phases | 7 elements | 3 phases | 5 steps
Works for AI/tech projects | Adaptable | Adaptable | Too abstract | Adaptable | Adaptable | Built for it

    Where the 5P Framework Is Different

    The comparison table reveals a pattern: every major model has at least one critical gap. ADKAR doesn’t define the project purpose. Kotter doesn’t measure outcomes. Lewin and Bridges don’t address tools or process. McKinsey 7-S is diagnostic, not prescriptive. None of them were built with technology adoption in mind.

The 5P Framework wasn’t designed to be the most detailed model. It was designed to be the most complete one — covering the full lifecycle of a project from “why are we doing this?” to “did it work?”

    Three Things the 5P Framework Does That Others Don’t

    1. It forces you to define success before you start.

    Every 5P engagement begins with Purpose — a measurable question. Not a vision statement. Not a mission. A question with a measurable answer. “What is the impact of our email marketing on revenue?” is a purpose. “Improve our marketing” is not. This single requirement eliminates the most common reason projects go sideways: nobody agreed on what “done” looks like.

    2. It puts technology in its place.

    Platform is the fourth P, not the first. This is deliberate. In the age of AI, the gravitational pull toward shiny new tools is stronger than ever. Teams want to start with “let’s use ChatGPT” or “let’s deploy this AI agent” before they’ve defined why, who, or how. The 5P Framework makes it structurally impossible to jump to Platform without first addressing Purpose, People, and Process.

    3. It closes the loop.

    Performance ties directly back to Purpose. Did you answer the question you set out to answer? Did the metrics move? If your performance metrics don’t map to your original purpose, you either have the wrong metrics or the wrong purpose. This feedback loop is conspicuously absent from every other major framework.

    A note on flexibility: Purpose always comes first. Performance always comes last. But People, Process, and Platform can be addressed in whatever order your situation demands — and they can be defined in parallel. The framework isn’t rigid. It’s bookended.

    Which Framework Should You Actually Use?

    The honest answer: it depends on what you need.

If your challenge is individual adoption — getting specific people to change their behavior — ADKAR is excellent. It gives you a diagnostic for exactly where an individual is stuck and what to do about it. Use the 5P Framework at the project level and ADKAR at the individual level, and you have both the strategic structure and the personal playbook.

    If your challenge is organizational momentum — getting a large enterprise to move in a new direction — Kotter gives you the political and cultural roadmap. Building coalitions, creating urgency, and generating short-term wins are critical skills for navigating organizational politics. The 5P Framework can sit underneath Kotter as the project-level structure for each initiative within the larger transformation.

    If your challenge is emotional resistance — people are grieving the old way or paralyzed in the “neutral zone” — Bridges gives you the language and empathy framework to meet them where they are. It complements the 5P Framework’s more structured approach.

    If your challenge is organizational alignment — you suspect the problem isn’t the change itself but a misalignment between strategy, structure, and culture — McKinsey 7-S gives you the diagnostic. Use it to identify the root cause, then use the 5P Framework to plan and execute the fix.

    If your challenge is any project that involves technology, AI, or digital transformation — the 5P Framework was built for this. It’s the only model that explicitly addresses tool selection as a distinct step that comes after Purpose, People, and Process.

    Bottom line: The 5P Framework is not a competitor to these models. It’s a complement that fills the gaps they leave — particularly around measurable purpose, explicit technology decisions, and performance accountability. Use the 5P Framework as your project-level operating system, and layer other models on top for specific challenges.

    Getting Started with the 5P Framework

    If you’re ready to try the 5P Framework on your next project, here’s where to begin:

    Start with Purpose. Take whatever project is on your plate right now and restate its goal as a measurable question. Not “implement AI in our marketing” — but “What is the measurable impact of using generative AI on our content production efficiency?” If you can’t state your purpose as a question with a measurable answer, you’re not ready to move forward.

    Then work the Ps. Identify the People (stakeholders, team members, customers). Define the Process (repeatable, documented steps). Choose the Platform (tools that serve the process, not the other way around). Set your Performance metrics (tied directly to your Purpose question).

    Then measure. After execution, come back to your Purpose statement. Did you answer the question? If yes, document it and share the results. If no, diagnose which P broke down and iterate.

    How AI-Ready Is Your Organization?

    Take the free AI Project Vital Signs Assessment — 20 questions based on the 5P Framework.


    Explore the 5P Framework:   Framework Hub  •  Change Management  •  AI Strategy  •  Getting Started

  • Your AI Pilot Does Everything Except Its Job

I was at lunch the other day and the bartender started venting about the parking system in the building. Apparently it’s run by an AI startup — the founder is a buddy of the building owner — and here’s what it does: it scans your plate when you pull in, it logs your entry time, it tracks your space, it monitors duration, it calculates what you owe, it processes the data, it generates a record, and it… never sends you a bill.

    Eight steps. Eight whole steps of sophisticated data collection. And the one thing the system was actually built to do? It doesn’t do it.

    I haven’t stopped thinking about that since.

    The Most Expensive Way to Look Busy

    Here’s why that story matters beyond bad parking software. That’s every AI pilot I’ve seen in the last two years.

    Companies are deploying AI systems that do an impressive amount of work. They collect data. They process inputs. They run models. They generate outputs. Executives review dashboards. Teams attend status meetings. Everyone agrees the pilot is “progressing.”

    But nobody asks the uncomfortable question: is it actually doing the thing we needed it to do?

Gartner predicts that 60% of AI projects will be abandoned through 2026. That’s not because the technology failed. It’s because the projects were never designed around an outcome. They were designed around activity.

    And there’s a massive difference.

    Activity Is Not Outcome

    This is the part that gets me. I talk to executives all the time who can describe their AI pilot in incredible detail. They’ll walk me through the architecture, the data sources, the model they chose, the vendor they’re working with. They’ll tell me about the meetings, the steering committee, the quarterly reviews.

    Then I ask: what business problem does this solve?

    And I get a pause.

    Not because they don’t have an answer. They do — they’ll say something about efficiency or insights or transformation. But when I push on what specific, measurable outcome they expected by now? That’s when it gets quiet.

    The parking system doesn’t have a technology problem. It has a “nobody defined what done looks like” problem. And so do most AI pilots.

    Why This Keeps Happening

    I keep coming back to the 10/20/70 framework we use at Trust Insights. Ten percent of your success with AI comes from the algorithms. Twenty percent comes from the technology and infrastructure. Seventy percent comes from business process and people.

    That parking system nailed the 10% — the algorithms clearly work, they’re scanning plates and calculating fees just fine. They probably have the 20% covered — the infrastructure is collecting and processing data. But the 70%? The actual business process that says “and then you send the bill”? Nobody built that part.

    It’s the same pattern everywhere. Companies invest heavily in the AI and the technology, and then completely skip the business process work that actually delivers the outcome. Because the process work isn’t exciting. Nobody gets a standing ovation at a board meeting for saying “we mapped out how the output connects to a business action.” They get a standing ovation for saying “we deployed an AI model.”

    The “Buddy” Problem

    Here’s the other part of that parking story that sticks with me. The AI startup got the contract because the founder is a buddy of the building owner. Not because they had a track record. Not because they demonstrated they could solve the parking billing problem. Because they knew a guy.

    This is happening in enterprise AI every single day. Vendor selection based on relationships, golf games, and conference cocktail parties. Not on whether the vendor can actually deliver the specific outcome you need.

I’m not saying relationships don’t matter — they do, and they should. But when the relationship IS the evaluation criterion, you end up with an eight-step system that doesn’t send bills. You end up with an AI pilot that looks impressive in a slide deck and delivers nothing to the business.

    What “Done” Actually Looks Like

    If you’re running an AI pilot right now, I want you to answer one question: what does this produce that someone in the business can act on?

    Not “what does it process.” Not “what data does it analyze.” What does it produce? What’s the output that triggers a business action?

    For the parking system, it should be simple: it produces a bill, the customer pays, the building makes money. That’s the outcome. Everything else is infrastructure supporting that outcome.

    For your AI pilot, what’s the equivalent? If you can’t articulate it in one sentence, your pilot has the same problem as that parking garage. It’s collecting data and going nowhere.

    Stop Celebrating the Eight Steps

    The most frustrating part of all of this is that everyone involved in these pilots genuinely believes progress is being made. Because they’re measuring the wrong things. They’re measuring activity, not outcome.

    Meetings held. Data processed. Models trained. Dashboards built. Check, check, check, check.

    Bills sent? Revenue generated? Time saved on a specific process? Decision quality improved on a measurable metric?

    Crickets.

    This is a leadership problem, not a technology problem. The technology in that parking system works beautifully. It just doesn’t do the job. And leadership is so dazzled by the fact that AI is doing eight things that nobody noticed it isn’t doing the one thing that matters.

    If your AI pilot can’t answer “what business outcome did this produce this quarter” with a specific number, it’s not a pilot. It’s a very expensive hobby.

    And I say that as someone who genuinely wants these implementations to succeed. But they won’t succeed by accident. They’ll succeed because someone did the unsexy work of defining what done looks like before anyone wrote a line of code. That’s the job. And no amount of sophisticated data collection changes that.

    Katie Robbert is CEO of Trust Insights. When she’s not helping companies figure out whether their AI actually does anything, she’s probably listening to bartenders explain technology better than most consultants do.

  • You Can’t Build AI on a Museum: The Infrastructure Crisis Leadership Refuses to Look At

    Here’s a number I want you to sit with: Gartner predicts that through 2026, organizations will abandon 60% of their AI projects. Not because the AI didn’t work. Because the data underneath it was unusable.

    Sixty percent. That’s not a failure rate — that’s a pattern. And it’s a pattern that should alarm every executive who just signed off on an AI budget without asking a single question about the data infrastructure it’s supposed to run on.

    The Electric Train Problem

    There’s a metaphor I keep coming back to when I talk to companies about this. They want the electric train — the cutting-edge AI tools, the automation, the intelligent agents that are going to transform their business. And they should want those things. The capabilities are real.

    But their infrastructure is a horse-drawn carriage.

    Their data lives in seventeen different systems that don’t talk to each other. Their CRM hasn’t been cleaned since 2019. Their marketing data is in one platform, their sales data is in another, and their customer service data is in a third, and nobody’s reconciled them. Half their analytics still run on spreadsheets that one person maintains and nobody else understands.

    And into this environment, they’re deploying AI and expecting it to deliver insights.

    Here’s what the maturity data actually shows: fewer than one in five organizations report high maturity in any aspect of data readiness. Only 4% — four percent — have high maturity in both data governance and AI governance together. The foundation that AI needs to work doesn’t exist in most organizations. It’s not that it needs a tune-up. It fundamentally isn’t there.

    And yet the AI budget got approved. The data infrastructure budget didn’t.

    Meanwhile, Your Employees Built Their Own Railroad

    Here’s where this gets worse. While leadership has been debating which AI platform to buy, evaluating vendors, running pilots, and building PowerPoint decks about their “AI roadmap” — the workforce didn’t wait.

    Seventy-five percent of workers are already using AI at work. And 78% of them brought their own tools to do it. Nearly half are accessing AI through personal accounts, completely bypassing every security control, every data governance policy, and every compliance framework the company has in place.

    This isn’t hypothetical risk. Ninety percent of IT leaders say they’re concerned about shadow AI from a privacy and security standpoint. And here’s the part that should keep you up at night: 80% have already experienced negative AI-related data incidents. Not “might experience.” Have experienced. Past tense.

    So while leadership was still working on the strategy, the data was already flowing through tools and accounts that nobody in IT even knows about. Your employees didn’t wait for the electric train. They built their own railroad. And it runs through your proprietary data, your customer information, and your intellectual property.

    This Isn’t a Technology Conversation

    I know exactly what happens next in most organizations when they hear these numbers. They buy something. A data quality tool. A governance platform. An AI security solution. Another vendor, another contract, another implementation timeline.

    And it won’t fix the problem. Because this isn’t a technology problem. It’s a leadership problem.

    The reason most organizations have garbage data infrastructure isn’t because the right tool doesn’t exist. It’s because nobody wanted to do the boring work. Data governance is not glamorous. Data cleanup is not exciting. Reconciling seventeen systems into a coherent architecture doesn’t make for a great board presentation.

This is the 10/20/70 reality again. Ten percent algorithms, twenty percent technology, seventy percent business process and people. The infrastructure crisis isn’t in the 10% or the 20%. It’s in the 70% that nobody budgets for, nobody staffs for, and nobody wants to present at the all-hands because it doesn’t have a good demo.

    You know what does have a good demo? A generative AI tool running on clean data. The problem is, nobody wants to do the 12 months of invisible work that makes that demo possible.

    What the Foundation Actually Looks Like

    If this sounds familiar, it should. I wrote a few weeks ago about AI training programs failing because companies skip the assessment work. This is the same pattern, just applied to data instead of people.

    The fix isn’t complicated to understand. It’s just hard to do.

    Audit before you buy. Before you sign another AI contract, figure out what data you actually have, where it lives, who owns it, and whether it’s usable. This is the equivalent of getting a building inspection before you renovate. Almost nobody does it because they don’t want to hear the answer.

Govern before you chase the glamour. You need data governance that people will actually follow — not a 60-page policy document that lives in SharePoint and nobody reads. Governance that works is governance that’s built into the workflow, not bolted on after the fact. And it has to address the shadow AI problem directly, because your people are already using tools you don’t know about.

    Accept the timeline nobody wants to hear. Most organizations need 6 to 12 months of infrastructure work before their AI investments will actually pay off. Nobody wants to hear that. Every vendor in the market is telling you that you can deploy AI in weeks. And you can — you can deploy AI in weeks. You just can’t deploy AI that works on data that doesn’t exist.

    Stop treating data readiness as IT’s problem. This is a business problem. It requires business decisions about what data matters, how it should be structured, and who’s responsible for maintaining it. When data readiness gets delegated entirely to IT, it becomes a technical project. It needs to be a strategic priority with executive sponsorship and business ownership.

    The Museum Metaphor

    You can put a touchscreen in a museum. You can add digital signage and interactive displays and a really nice app. And when visitors use the app, it’ll work. But the building is still a museum. The infrastructure is still old. The plumbing still leaks. The electrical can’t support the load.

    That’s what most companies have done with AI. They’ve put a touchscreen on a museum and called it a digital transformation.

    The organizations that will actually win with AI — not in 2026, but in 2028 and beyond — aren’t the ones buying the most tools right now. They’re the ones doing the foundation work that nobody wants to talk about at conferences because it’s boring and it takes a long time and you can’t put it in a press release.

    They’re the ones whose leadership looked at the shiny AI demos and said, “Great. Now show me the data architecture.” And when the answer was unsatisfying, they had the discipline to fix the foundation before building on top of it.

    That’s not a technology decision. That’s a leadership decision. And right now, most leaders are choosing the demo over the foundation. They’ll figure out the cost of that choice in about two years — when 60% of their AI projects are abandoned, and they’re wondering what went wrong.

    The data was never ready. Nobody wanted to look.