The Hindsight Trap: Why Leadership Teams Misjudge Their Decisions

How leadership teams can separate the quality of a decision from the quality of its outcome, and stop misjudging their own track record.

Hindsight bias does its worst work in a specific moment. The leadership team is around the conference table. They've been talking for an hour about a deal that didn't close, or a hire that didn't work, or a strategic bet that missed. Pick whichever blew up most recently. What's happening in the room is not analysis. It is rationalization moving in one coordinated direction. We knew. We should have seen it. The signs were there.

This is The Hindsight Trap. The team is rewriting the record together, with everyone nodding along, and a decision that may have been entirely sound at the time gets recategorized as a mistake everyone supposedly saw coming.

The cost is not the failed deal. The cost is the next decision, and the one after that, made by a team that has lost track of whether their process is actually working or whether they have just been lucky.

This post covers how to tell the difference. The matrix that distinguishes good decisions from good outcomes. The audit method that reviews a decision phase by phase. And the two practices, the decision journal and the decision retrospective, that turn the audit into something your leadership team actually runs.

The Matrix: Four Outcomes, Two Decisions

Most leadership teams evaluate a decision by looking at the outcome. The outcome is bad; the decision must have been bad. The outcome is good; the decision must have been good. This is the most natural-feeling logical move in business, and it is also wrong.

Annie Duke calls this resulting. The cleaner academic name is outcome bias. Either way, the error is the same. You are using the result, which the decider did not have access to at the time, as the standard for judging the decision, which the decider had to make with the information available then. Results and decisions are different things. Conflating them produces predictable, expensive errors.

The cleanest way to think about this is Business as Poker, Not Chess. Chess is a game of perfect information where the better player almost always wins. Poker is a game of imperfect information where good plays sometimes lose and bad plays sometimes win. Over enough hands, skill rises through the noise. On any given hand, it does not. Business is poker.

The Decision Matrix makes the four possible combinations explicit.

Good decision, good outcome. Skill. The team called it right, and the result confirmed it. The playbook held.

Good decision, bad outcome. Bad luck. The team made the right call given what they knew, and the world did something they could not have predicted. A reasonable hire turned out to have a personal crisis. A sound strategic bet got blindsided by a regulatory change. The decision still passes the audit. The result is just one draw from a probability distribution that mostly favored the call.

Bad decision, good outcome. Dumb luck. The team made the call without enough information, or with the wrong people in the room, or under social pressure to converge fast. The result happened to land well. This is the most dangerous quadrant because the result reinforces a process that should have been corrected. The team will reapply the same broken process next time, and eventually, the luck runs out.

Bad decision, bad outcome. Deserved consequence. The process was broken, and the result reflected it. The lesson is straightforward, which makes this the easiest quadrant to learn from, yet teams rarely actually study it. Most teams blame circumstances rather than the process.

Why Leadership Teams Misjudge Their Own Decisions

The matrix tells you the four possible categories. It does not tell you why teams misclassify their own decisions so consistently. That is a cognition question, both individual and collective.

Two cognitive errors do most of the damage.

The first is outcome bias, which we just covered. The second is hindsight bias, the human tendency, after an outcome is known, to feel as though it had been predictable all along. The brain rewrites the memory file. What was actually a 60-40 call at the time feels, in retrospect, like an obvious 95-5. The team's collective sense of what was knowable shifts to match what is now known.

The two biases work together. Outcome bias says a bad result implies a bad decision. Hindsight bias says you should have seen it coming. Together they produce the verdict around the conference table: we knew, we should have known, the signs were obvious. It feels like clear-eyed accountability. It is a cognitive trap masquerading as accountability.

Here is the question that breaks the trap. What could you actually have known when you made the decision? Not what you would have liked to have known. Not what it looks like with hindsight. What was knowable, by anyone in the room, on the day the call was made.

Most of the time, when a team runs that question honestly, the answer is uncomfortable. Some of the things that now seem obvious were not knowable. Some were knowable, but nobody asked. Some were stated in the room and ignored. The audit lives in the difference.

Teams make this worse than individuals do. Here are three key patterns to watch for.

Convergent rewriting. Once one person says, "We should have seen it," the room agrees, and the version of events solidifies fast. Anyone who remembers it differently has to push against social momentum.

The loudest hindsight. The person with the strongest after-the-fact narrative is rarely the person who was actually right at the time. Confidence about what should have been seen is not evidence that anyone actually saw it.

Protecting the decider. When the founder-CEO is the one who made the call, the team tends to bend the audit to spare them. The decision gets reclassified as bad luck even when it was a process failure. The team protects the leader and pays for it on the next call.

What The Hindsight Trap Costs Over Time

The Hindsight Trap is expensive in two opposite directions. Both compound.

In one direction, the team reapplies bad decisions that produced lucky wins. The hiring playbook that has worked for five years was actually never that rigorous. The team was fortunate to have a string of candidates who turned out well. When conditions change, the playbook stops working, and the team is left running a process they cannot diagnose because the wins always validated it. They keep applying it for another quarter, another year, before someone admits the process was broken all along.

In the other direction, the team abandons sound decisions after one bad outcome. A real-time strategic call that was the right move, given what was knowable, results in a loss. The team retreats entirely from that style of bet, and a discipline that was working gets thrown out because of one unlucky pull.

I see this most often when a leadership team comes in confused. The same playbook that produced wins is now producing losses, and no one can articulate why. My read on these conversations is almost always the same. The wins were partly luck, and the recent loss is what the odds said should have been happening more often. Or the conditions shifted under the team's feet, and they did not notice because the process kept generating wins by inertia. Either way, the audit reveals what the track record was hiding.

This is the deeper cost. Not the bad outcome itself. The eroded ability of the team to tell, in real time, whether their process is actually working. Once that erosion sets in, every new decision is harder to evaluate, and the team starts running on confidence built from past wins that may or may not have been earned.

The ICMAI Audit: Reviewing A Decision Phase By Phase

The matrix tells you whether the outcome is good or bad. The work on cognitive biases tells you not to confuse the outcome with the decision. Neither tells you, when a decision turns out to have been broken, where in the process it broke. That is what the ICMAI audit does.

ICMAI is the framework for assigning decision rights within a leadership team. Input, Collaborate, Make, Approve, Inform. Each phase is a different role. The audit applies the same five phases as evaluation lenses. For any past decision, you walk through each phase and ask whether the work of that phase was actually done.

Input. Was the information that fed the decision complete and accurate? Were there gaps the team knew about and ignored, or pressed ahead over because gathering more information would have pushed the decision past its deadline? A bad outcome with a clean Input phase is different from a bad outcome with a known information gap that was never closed.

Collaborate. Did the right voices get into the room, and did the room actually hear them? Or did the discussion converge fast around the loudest person, the most senior person, or the person with the strongest pre-existing position? Healthy collaboration produces friction. The absence of friction is usually the absence of real collaboration.

Make. Did the person who made the call have the authority and domain knowledge to do so? Or was the decision made by someone outside their lane because no one wanted to push back, or because the actual decider was unavailable? A decision made by the wrong person is a process failure even when the outcome turns out fine.

Approve. If the decision required approval, was the approval criterion clear, or did it get rubber-stamped because the approver did not have the time or the context to challenge it? Approval as a real check is different from approval as a procedural courtesy.

Inform. Did the people who needed to act on the decision actually understand it, including the reasoning behind it? Or did they receive a directive without context and execute against the wrong intent? A clean decision can still produce a bad outcome if the Inform phase is broken.

The audit is not a witch hunt. It is a structured way to figure out, when an outcome lands badly, where in the process the team can actually improve. Most decisions that produce bad outcomes have one or two phases that broke and three or four that worked. The audit identifies which is which.

The Journal And The Retrospective: From Data To Learning

The audit only works if you can actually reconstruct what the team knew and assumed at the time of the decision. Hindsight bias has already corrupted the record by the time the retrospective starts. This is why the audit needs two practices, not one. The decision journal captures the data. The decision retrospective converts the data into learning.

Start with the journal. Shane Parrish at Farnam Street has popularized the decision journal for executives, building on work by Annie Duke and others. The structure is simple. When the team makes a significant decision, each person who had a meaningful voice in it writes a short entry. The decision being made. The information available. The assumptions being relied on. The alternatives considered and rejected. The expected outcome. The level of confidence.

The confidence number matters. My standard recommendation is to aim for 80% confidence on most decisions. Pushing past 80% rarely justifies the time cost, unless you're faced with a critical, irreversible decision that may warrant a higher threshold. The journal asks whether you were actually at 80% when you made the call. If you were at 60% and voted yes anyway, that is information you need later. If you were at 95% and the outcome went sideways, that information is also useful later. Confidence calibration is one of the things teams improve fastest once they start tracking it.

Each team member writes their own entry. Not a consensus document. Individual entries preserve disagreement, which is what makes the retrospective useful. If everyone in the room remembers having the same view, you have lost the most valuable signal.

The decision retrospective is the conversation that turns the entries into learning. The format is borrowed directly from Lean/Agile, where retrospectives are a core team practice for reviewing recent work and improving the process. Apply the same discipline to a specific decision once the outcome is clear. Pull out the journal entries, walk the matrix, walk the ICMAI audit, and identify what is process learning and what is decision learning.

Process learning is about how the team decides. Was the Input phase rushed? Did Collaborate converge too fast? Did Approve get rubber-stamped? These learnings change how the team will run the next decision.

Decision learning is about the call itself. Were the assumptions reasonable given what was knowable? Was the confidence level honest? These learnings change how the team thinks about a specific class of decisions, like hires, partnerships, or major bets.

The two are different. Process improvements compound across all decisions. Decision improvements apply only to similar future calls. Mixing them is a common retrospective failure.

A leadership team running this practice well does retrospectives at two natural points. The first is when the outcome of a major decision becomes clear, regardless of timing. A failed hire or a major win deserves immediate retrospective while the journal entries are still fresh. The second is at quarterly planning sessions, where the team reviews several decisions whose outcomes have landed and looks for patterns. Quarterly is where process learnings accumulate into actual changes in how the team operates.

The journal without the retrospective is just notes in a file. The retrospective without the journal is a rationalization, because the team's collective memory has already been rewritten by hindsight bias. Both practices need to run together.

Diagnostic: Three Tests For A Recent Team Decision

Pick one decision your leadership team made in the past 90 days where the outcome is now clear. Run these three tests.

  1. Could you actually have known the things you now think were obvious? Not what you would have liked to have known. Not what it looks like with hindsight. What was knowable, on the day the decision was made, by anyone in the room. If most of the "obvious" signals were only apparent after the outcome, hindsight bias is at work, and the verdict against the decision needs to be revisited.

  2. Where in the ICMAI phases did the process actually break, if it broke at all? Walk Input, Collaborate, Make, Approve, and Inform separately. A bad outcome usually has one or two phases that failed and three or four that worked. If the team cannot point to a specific phase where the process broke, the outcome was probably not a process failure. It was probably the odds playing out.

  3. Would the team be willing to make the same decision again, given what was knowable then? If yes, the decision was sound, and the outcome was just bad luck. The team should not change the process, only update the inputs that have genuinely changed. If no, the team should be able to articulate which specific phase of the process they would run differently. If neither answer is clear, the team needs the journal and the retrospective running before the next significant decision.

The goal is not to assign blame. The goal is to separate the decisions that require process changes from the outcomes that only require acceptance.

Where To Start

Pick one decision from the past 90 days where the outcome is now known. Schedule a 30-minute team retrospective this week. Have each person who had a meaningful voice in the original decision write down, individually, what they remember knowing, what they assumed, and what they predicted at the time. This first retrospective will be the hardest, because the journal entries are reconstructed and hindsight bias has already corrupted the record.

That is fine. The point of the first retrospective is not perfect accuracy. The point is to start running the practice. Every retrospective after this one will be sharper because the journal will be running in real time, capturing what the team actually knew before the outcome was known.

If the team only does one of the two practices, the journal is the prerequisite. Without it, the retrospective devolves into shared rationalization. Once the journal is running, the retrospective takes care of itself.

Questions for You and Your Team

Use these to surface whether The Hindsight Trap might be operating in your leadership team's current decision-review practice. Each question lands harder when answered with a specific recent decision in mind, not in the abstract.

  • When was the last time your leadership team formally reviewed a decision whose outcome was now known? If the answer is "never" or "I'm not sure," your team is converting outcomes into verdicts without an audit. The matrix is doing nothing for you because nobody is using it.

  • If a decision your team made a year ago turned out badly, can anyone in the room point to a written record of what the original confidence level was and which assumptions were being relied on? If not, hindsight bias has already rewritten the record, and the team is making its current decisions against a corrupted version of its own history.

  • When your leadership team last had a string of wins, did anyone ask whether the wins were earned or lucky? The track record everyone trusts is also the hardest to audit. The wins are where dumb luck hides, and dumb luck eventually runs out.

The goal isn't to have perfect answers. It's to surface whether The Hindsight Trap might be affecting how your leadership team learns from its own decisions.

Take the Next Step

If your leadership team is making decisions without a structured way to review them, the gap is rarely one bad call. It is a missing practice. ROADMAP12 builds the planning and review cadence that makes decision retrospectives a normal part of how your team operates. Quarterly planning sessions become the venue where the matrix, the ICMAI audit, and the journal-and-retrospective practice live and compound across the year.

Learn more about ROADMAP12: https://www.eckfeldt.com/roadmap12

Or book a call to discuss your leadership team's decision-review practice: https://www.eckfeldt.com/bookacall

