Some of the worst decisions I’ve ever made turned out just fine.

Not because they were smart.
Because luck covered for lazy thinking.

And some of the best decisions I’ve ever made blew up anyway.

Good process. Bad outcome.

If I judged those decisions purely by results, I’d learn the wrong lessons every time.

I’d repeat the lucky mistakes.
And I’d abandon the disciplined thinking that actually keeps me sharp.

This is a trap I see experts fall into all the time, especially when it comes to critical thinking.

You can be wrong and lucky, or right and unlucky. In the short run, luck often dominates skill.

Michael Mauboussin

And it doesn’t just affect your choices. It quietly chips away at your credibility…without you even noticing.

You Can't Automate Good Judgment

AI promises speed and efficiency, but it’s leaving many leaders feeling more overwhelmed than ever.

The real problem isn’t technology.

It’s the pressure to do more with less — without losing what makes your leadership effective.

BELAY created the free resource 5 Traits AI Can’t Replace & Why They Matter More Than Ever to help leaders pinpoint where AI can help and where human judgment is still essential.

At BELAY, we help leaders accomplish more by matching them with top-tier, U.S.-based Executive Assistants who bring the discernment, foresight, and relational intelligence that AI can’t replicate.

That way, you can focus on vision. Not systems.

You might be thinking, “So what? Results are what matter.”

Fair.

But when you judge decisions only by outcomes, you become unfair, teach the wrong lessons, and slowly lose trust, because people notice you’re rewarding luck and punishing good thinking.

Over time, you also make worse decisions yourself, repeating lucky mistakes and abandoning sound approaches that simply didn’t work once.

And eventually, you earn a reputation for being results-obsessed instead of thoughtful, which may win short-term, but costs you long-term.

I’ve worked with leaders on both sides of this. One approach builds trust and smart risk-taking; the other builds fear, silence, and safe play.

Which one would you rather work for?

The Distinction That Changes Everything

Here’s the reframe:

A good decision is one that was well-reasoned based on what was knowable at the time.

Not what became obvious later.
Not what you learned after the fact.
What you had in hand then.

That’s the only fair standard.

Because here’s the truth:

A good decision can still produce a bad outcome.
That’s not incompetence. That’s bad luck.

And a bad decision can still produce a good outcome.
That’s not mastery. That’s good luck.

The goal isn’t to be lucky.

The goal is to be well-reasoned, because luck swings, but good thinking compounds.
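That "luck swings, but good thinking compounds" claim can be made concrete with a tiny simulation. This is a minimal Python sketch, and the 60/40 "win rates" are assumed numbers chosen purely for illustration: a sound process gives you an edge on each decision, but luck decides any single outcome.

```python
import random

def outcomes(n_decisions, p_win):
    """Count good outcomes over n decisions, each one a coin flip
    weighted by process quality (p_win is an assumed illustration)."""
    return sum(1 for _ in range(n_decisions) if random.random() < p_win)

random.seed(0)
TRIALS = 2000

# Fraction of trials where the sounder process (60% per-decision edge)
# actually ends up ahead of the sloppier one (40%) -- first over a
# short run of 5 decisions, then over a long run of 200.
short_run = sum(outcomes(5, 0.60) > outcomes(5, 0.40)
                for _ in range(TRIALS)) / TRIALS
long_run = sum(outcomes(200, 0.60) > outcomes(200, 0.40)
               for _ in range(TRIALS)) / TRIALS

print(f"ahead after   5 decisions: {short_run:.0%}")  # luck still flips many
print(f"ahead after 200 decisions: {long_run:.0%}")   # the edge compounds
```

In the short run the better process loses a meaningful share of the time, which is exactly why judging by single outcomes misleads; over many decisions the edge is nearly impossible to beat.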

When you evaluate decisions this way, something important shifts:

  • You can acknowledge a failure without scapegoating.
    “The result was bad. The thinking was sound. Here’s what we learned.”

  • You can examine a win without false confidence.
    “The result was good. But what was skill… and what was luck?”

  • You can give feedback that improves judgment—not just performance.
    Instead of “That didn’t work,” you ask: “Walk me through your reasoning.”

This is how you earn a reputation for fairness.
And fairness builds trust.
And trust is what turns expertise into influence.

The PROCESS Framework

Here’s how to evaluate decisions by the quality of thinking, not just the outcome.


Let’s get it…

P — Prior information available

What did you actually know at the moment the decision was made?

Not what you know now.
Not what became obvious later.
What was reasonably knowable then?

This is where hindsight bias wrecks good judgment.
Once you know the ending, the plot feels obvious.
It wasn’t.

R — Reasoning applied

Given the information you had, was your reasoning sound?

Did you think clearly? Did you weigh the right factors? Did your logic hold up?

Good reasoning doesn’t guarantee good outcomes.
But it is what you control.
And it’s what you should evaluate.

O — Options considered

What alternatives were truly on the table?

Did you explore a real range of options or lock onto the first “good enough” answer?

Sometimes the better option only looks obvious in hindsight.
The fair question is: Was it visible at the time?

C — Constraints acknowledged

What constraints shaped the decision?

Time pressure.
Incomplete information.
Limited resources.
Organizational politics.

A decision made under heavy constraints shouldn’t be judged like one made with unlimited runway.
The question is: Was it the best move available in the real situation?

E — Execution quality

Was the outcome caused by the decision… or the execution?

A sound decision with poor execution can look like failure.
A weak decision with great execution can look like success.

Separate the thinking from the doing. They’re not the same skill.

S — Skill versus luck

What portion of the outcome was skill—and what portion was luck?

Be honest.

Most people want their wins to be skill…
and their losses to be bad luck.

Reality is messier. And that honesty is what sharpens your judgment.

S — Summary of learning

What did this decision teach you about your process?

Not “Should I repeat this exact outcome?”
But: “How do I improve how I decide next time?”

Because the goal isn’t to avoid bad outcomes.
You can’t control outcomes.

The goal is to refine your decision-making so that over time…skill beats luck.
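If it helps to make the audit tangible, the seven questions above can be sketched as a simple checklist structure. This is just an illustrative sketch; the field names are my own shorthand, not part of the framework itself.

```python
from dataclasses import dataclass, fields

@dataclass
class ProcessAudit:
    """One field per letter of PROCESS (names are illustrative)."""
    prior_information: str    # P -- what was knowable at the time
    reasoning: str            # R -- was the logic sound, given that?
    options: str              # O -- alternatives genuinely on the table
    constraints: str          # C -- time, information, resources, politics
    execution: str            # E -- decision quality vs. execution quality
    skill_vs_luck: str        # S -- honest split of the outcome
    summary_of_learning: str  # S -- what to change in the process

    def incomplete(self):
        """Return the questions still left unanswered."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Usage: fill in what you can, then see which questions you skipped.
audit = ProcessAudit(
    prior_information="Q3 data only; churn numbers arrived later",
    reasoning="", options="ship now vs. delay a quarter",
    constraints="two-week deadline", execution="",
    skill_vs_luck="", summary_of_learning="",
)
print(audit.incomplete())
```

The point of the structure is the forcing function: an audit isn't done until every question has an answer, not just the ones that flatter the outcome.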

What This Sounds Like

Instead of: “That launch failed. What went wrong with the decision?”
Try: “The launch missed targets. Before we judge the decision, what did we know at the time? Given that, was the reasoning sound? What was skill vs. luck? What do we change in the process next time?”

Instead of: “You should’ve seen that coming.”
Try: “Walk me through your thinking at the time. What did you know? What options did you consider? Does the reasoning still hold up, even though the outcome didn’t?”

The second versions are harder. They force you to separate what you know now from what was knowable then. But they’re also fairer.

And fairness is what builds trust.

The Influence Connection

Here's why this matters beyond being a better thinker:

Leaders who evaluate decisions fairly become safe to take risks around.

Think about it. If you punish every bad outcome, your team learns to avoid risk. They optimize for not failing, not for making good decisions. They hide mistakes. They play it safe. They don't bring you the bold ideas because bold ideas sometimes fail.

But if you evaluate the quality of thinking, not just results, something different happens. People feel safe to take calculated risks. They bring you the real information, not the filtered version. They admit mistakes early because they know you'll evaluate fairly.

That's how you build a team that actually thinks. And a reputation as someone worth following.

The leaders I've respected most were the ones who could say: "The outcome was bad, but the decision was sound. Let's learn what we can and move forward." And equally: "The outcome was good, but let's be honest that we got lucky. Let's not over-learn from this."

That's intellectual honesty. It's rare. And it builds influence that lasts.

The Hardest Part

I’ll be honest: this is hard to practice consistently.

When something works, you want credit. When something fails, you want a clean explanation.

Results feel concrete. Process feels squishy.

And sometimes you’ll do the work, evaluate fairly, separate outcome from decision, and still be in a culture that only rewards outcomes.

That’s frustrating. I’ve been there.

But I’ve found that even if others judge you by results, you can still train yourself to learn by process.

You can still improve your thinking in ways that compound.
You can still become the person who stays honest about skill and luck.

And the right people will notice.

They’ll notice you don’t rewrite history, you don’t scapegoat, and you stay fair when fairness is harder than blame.

That reputation is worth building.

A good decision is one that was well-reasoned given the information available at the time, not one that happened to turn out well. Evaluate your thinking, not your luck.

LEVEL UP
AI Prompt: The Decision Quality Audit

Copy, paste, and complete this in your favorite LLM:

I want to evaluate a decision fairly, separate from its outcome. Help me assess the quality of the thinking, not just the result.

Here's the decision I made: [Describe it]
Here's how it turned out: [The outcome]

Help me:

1. Reconstruct what I actually knew at the time. What was knowable versus what I learned after?
2. Evaluate my reasoning. Given the information available, was the logic sound?
3. Assess the options I considered. Did I explore a reasonable range? Did I dismiss anything too quickly?
4. Account for constraints. What limitations was I operating under?
5. Separate skill from luck. What portion of the outcome was driven by my decision versus factors outside my control?
6. Identify process improvements. What should I learn about my decision-making, separate from the outcome?



The Bottom Line

A good decision is one that was well-reasoned given the information available at the time, regardless of how it turned out.

Judge decisions only by outcomes and you’ll reward luck, punish discipline, and lose trust.

Judge decisions by process and you become fair, build teams that take smart risks, and earn a reputation for intellectual honesty.

Results are feedback, not verdicts. Evaluate the thinking, not just the luck.

Thanks for reading. Be easy!
Girvin

What did you think of today's newsletter?

Your feedback helps us make the best newsletter possible.

