Strategic Choice

Strategic Decisions as Bets

Planted Feb 2026 · Pruned Feb 2026

Treating strategic decisions as bets with assigned probabilities enhances clarity, accelerates learning, and improves judgment under uncertainty, moving organizations from comfortable delusion to disciplined decision-making.

Strategic decisions are bets — commitments of resources made under uncertainty with the expectation of a favorable outcome.

The question isn't whether your decisions involve uncertainty. They always do. The question is whether you'll be explicit about that uncertainty or pretend it doesn't exist.

Most organizations choose pretense. They say "we're confident this will work" without quantifying what "confident" means. They avoid probability language because it feels like admitting weakness. But refusing to measure uncertainty doesn't eliminate it — it just prevents you from improving your judgment over time.

What happens when you treat decisions as bets

When you assign probabilities to outcomes — even rough ones — several patterns emerge.

Assumptions surface. Saying "I'm 65% confident this will work" forces you to articulate what would have to be true for success. You can't hide behind vague optimism.

Learning accelerates. When you track predictions against outcomes, you see patterns in your judgment. You discover which types of decisions you're systematically overconfident about, which risks you underestimate, which assumptions consistently prove wrong.

Resource allocation gets clearer. Comparing bets by their expected value (probability × impact) makes trade-offs explicit. A 30% chance of transformational impact versus an 80% chance of incremental improvement — that's a different conversation than "both are important initiatives."
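The expected-value comparison above can be sketched in a few lines. A minimal illustration, assuming an arbitrary 1–10 impact scale and invented numbers for the two hypothetical initiatives:

```python
# Compare two hypothetical bets by expected value (probability × impact).
# Impact is scored on an arbitrary 1-10 scale; all figures are illustrative.
def expected_value(probability: float, impact: float) -> float:
    return probability * impact

transformational = expected_value(0.30, 10)  # 30% chance of transformational impact
incremental = expected_value(0.80, 3)        # 80% chance of incremental improvement

print(f"Transformational bet EV: {transformational:.1f}")
print(f"Incremental bet EV:      {incremental:.1f}")
```

Even with rough numbers, the comparison forces an explicit trade-off: the long-shot bet can carry a higher expected value than the safe one, which is exactly the conversation "both are important initiatives" avoids.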

The alternative is comfortable delusion. Organizations that avoid probability language don't eliminate uncertainty. They just refuse to measure it, which guarantees they won't improve.

The connection to "what would have to be true?"

Roger Martin's strategic choice framework asks: what would have to be true for this option to be the right choice?

For a new program to succeed, it would have to be true that participants value the outcome enough to commit significant time. It would have to be true that you can deliver quality at the scale required. It would have to be true that funders will support something without an established track record.

Each of those is a bet. Once you make them explicit, you can estimate odds. How confident are you that participants will commit that time? 70%? 40%? The number matters less than the discipline of assigning one.

Then you can ask: What evidence would change my confidence? What could I test quickly? Is this bet worth making at these odds?
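One way to make "what evidence would change my confidence?" concrete is a Bayesian update — a standard tool the text doesn't name, sketched here with hypothetical numbers for the participant-commitment bet:

```python
# Hypothetical scenario: you are 40% confident participants will commit the
# time. A quick pilot survey comes back positive. Assume a positive survey
# would occur 80% of the time if participants really would commit, and 30%
# of the time even if they wouldn't (a false positive). All numbers invented.
def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: revised confidence after observing the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

posterior = update(prior=0.40, p_evidence_if_true=0.80, p_evidence_if_false=0.30)
print(f"Confidence after positive pilot: {posterior:.0%}")
```

The mechanics matter less than the habit: a cheap test with known error rates turns "I feel better about this" into a specific, revisable number.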

Martin's framework surfaces the assumptions. Probability thinking forces you to evaluate them honestly.

The discomfort is the point

People resist this because assigning probabilities feels like false precision when information is incomplete. But you have enough information to make the decision, which means you have enough information to estimate odds.

The discomfort isn't a sign you're doing it wrong. It's a sign you're doing it right. Strategic decisions require judgment under uncertainty. Quantifying that uncertainty — even roughly — is how judgment improves.

Precision and calibration are different things. A probability estimate doesn't claim to be precise. It claims to be calibrated. If you say "65% confident" for 100 decisions, roughly 65 should succeed. That's testable. That's learnable. That's better than "we're confident" with no accountability.
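Calibration is testable with nothing more than a prediction log. A minimal sketch (the log entries are invented for illustration): group past predictions by stated confidence and compare each group against its observed success rate.

```python
from collections import defaultdict

# Each entry: (stated confidence, did it succeed?). Invented data.
log = [
    (0.65, True), (0.65, True), (0.65, False), (0.65, True),
    (0.90, True), (0.90, False),
]

def calibration(entries):
    """Observed success rate for each stated confidence level."""
    buckets = defaultdict(list)
    for confidence, succeeded in entries:
        buckets[confidence].append(succeeded)
    return {c: sum(outcomes) / len(outcomes) for c, outcomes in buckets.items()}

for confidence, observed in sorted(calibration(log).items()):
    print(f"said {confidence:.0%} -> observed {observed:.0%}")
```

With enough entries, gaps between stated and observed rates reveal exactly the pattern described above: which kinds of decisions you're systematically overconfident about.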

Strategy is gambling — committing resources before outcomes are known. The question is whether you'll be a disciplined gambler (tracking odds, learning from outcomes, improving over time) or an undisciplined one (making bets without acknowledging them as bets).