
Chapter 10: Knowing Under Uncertainty and Risk

Writer: Paul Falconer & ESA

The choice you cannot avoid

A few years ago, a friend faced a medical decision.

She had a chronic condition. There were two treatment paths. One was well-studied, with clear statistics: 70% of patients improved significantly, 20% saw modest improvement, 10% experienced side effects that ranged from uncomfortable to serious. The other was newer, less studied, with promising early results but much wider uncertainty bands.

She did everything this book has talked about so far.

She clarified the claims. She gathered evidence from multiple sources. She held the null hypothesis—not yet persuaded—while she investigated. She asked what would falsify each option. She calibrated her confidence: about 80% that the first path would work as advertised, maybe 60% that the second would, but with a chance of much greater benefit.

Then she looked at me and said: "I still don't know what to do."

She was right. All the tools had done their work. They had clarified the choice, narrowed the uncertainty, exposed hidden assumptions. But they had not—could not—eliminate the fundamental fact that she had to choose under conditions where the future was not guaranteed.

This chapter is about that moment.

The moment when the tools have done all they can, and you still face the gap between what you know and what you need to decide. The moment when you must act, even though certainty is not available.

By this point, you might be noticing a tension.

On the one hand, you have more tools than most people ever explicitly learn: questions, claims, evidence, the Null Hypothesis, Burden of Proof, falsifiability, confidence as a gradient, proportional scrutiny. On the other hand, the world refuses to wait while you gather perfect evidence. You have to decide anyway—what to do with your time, your money, your health, your vote, your relationships.

This chapter is about that tension.

We'll look at two levels:

  • Everyday decisions, where mistakes are painful but mostly reversible.

  • High‑stakes decisions, where the harm from being wrong can be large, delayed, or spread across many people.

You'll see that epistemological skepticism isn't about waiting for certainty. It's about matching your actions to your best current map, while keeping room to update as reality pushes back.

Everyday uncertainty: the job offer

Start with something familiar.

Imagine you've received a job offer in another city.

The role looks good on paper. The pay is better. The city has appealing aspects and some clear drawbacks. You have spoken to a few future colleagues, but you know you don't see the full picture: office culture, hidden politics, how you'll actually feel living there.

You cannot know in advance whether this move will turn out "well." You only know that staying put also has costs.

How do you decide?

You can walk through your tools:

  1. Clarify the question.

    It's not "Is this job perfect?" The real question might be: "Given what I care about over the next few years, does this offer move me in a better direction than staying?" That's already more workable.

  2. List the claims in play.

    • "The work will be meaningful."

    • "The team will be supportive."

    • "The city will suit my needs."

    • "The change will be worth the disruption."

  3. Start from null on each claim.

    Not yet persuaded either way. For each claim, ask: "What evidence do I actually have? How strong is it? What would count against it?"

  4. Check your evidence ladder.

    Maybe you're at:

    • Anecdotes from one enthusiastic hiring manager.

    • A couple of Glassdoor reviews (mixed).

    • One conversation with someone who used to work there.

    This is low‑to‑mid‑rung evidence. It shouldn't give you 90% confidence, but maybe it gets you to 50–60% on some points.

  5. Estimate stakes and reversibility.

    • This decision affects your daily life and relationships. Stakes are non‑trivial.

    • It is partly reversible: you could, in principle, leave after a year, but at real cost.

    That suggests you want more than a coin‑flip level of confidence, but you'll never get to 100%.

  6. Ask about asymmetries.

    • If you go and it's bad, can you recover?

    • If you stay and miss an opportunity that would have been good, how much will that cost you?

You are, in effect, doing a rough, human‑scale version of expected‑value thinking.

You can't assign precise numbers. But you can ask:

  • "On balance, given my current map, does going look more likely than staying to move me toward the life I want?"

  • "Can I structure the decision to reduce downside risk?" (For example: negotiating a clearer probation period, keeping your network alive in your current city, setting check‑ins with yourself at 3, 6, and 12 months.)

At the end of this process, you will still be uncertain.

Epistemological skepticism doesn't make that go away. What it does is ensure that when you act, you do so eyes open: clear on what you're assuming, how strong your evidence is, and what you'll watch for as reality delivers feedback.

High‑stakes uncertainty: the AI deployment

Now scale up.

Imagine you're on an advisory panel for a public institution considering deploying an AI‑based system for triaging citizen requests—deciding whose cases get priority.

The promise: increased efficiency and fairness. The worry: hidden biases, opaque errors, erosion of accountability.

You cannot run a perfect trial that captures all future consequences. You cannot foresee every failure mode. Yet delaying forever also has a cost: people are suffering under the current, overloaded system.

How do you advise?

Walk through the same tools, but with stakes explicitly in view.

  1. Clarify the core claims.

    • "This system improves efficiency compared to the status quo."

    • "This system reduces unfair disparities."

    • "Failure modes will be detectable and corrigible in time."

Each of these is a substantive claim about the world. Each is, in principle, falsifiable: you can imagine data that would count for or against.

  2. Null Hypothesis and burden of proof.

    You start from not‑yet‑persuaded, especially given the complexity and stakes. The vendor and the advocates for deployment carry the burden of proof. They must produce evidence, not just assurances.

  3. Evidence ladder and proportional scrutiny.

    For a system that affects vulnerable people at scale, anecdotes and internal test results are not enough. You should expect:

    • Independent audits.

    • Transparent performance metrics across groups.

    • Stress tests simulating edge cases.

    • Clear channels for appeal and correction.

    If that level of evidence is missing, proportional scrutiny says: the burden has not been met.

  4. Falsifiability and failure modes.

    You ask explicitly:

    • "What would count as evidence that this system is not reducing disparities?"

    • "How will we detect harms that aren't obvious from the dashboards?"

    • "What is the rollback plan if we discover serious issues?"

    You also look for the failure modes from Chapter 8: moving goalposts ("those cases don't count"), immunising the belief ("criticism shows how right we are"), shifting from world‑claims to identity‑claims ("if you question this system, you're anti‑innovation").

  5. Asymmetry and precaution.

    The asymmetry here is sharp: if you deploy and serious harms emerge, those harms may be hard to undo, especially for those least able to protect themselves. If you delay to gather better evidence, some people may suffer under the current system, but you avoid locking in a harmful new one.

    Precaution does not mean "never deploy." It means:

    • Start smaller (pilot rather than full rollout).

    • Build in strong monitoring from day one.

    • Give affected communities a voice in evaluation.

    • Commit in advance to pausing or revising if certain thresholds are crossed.
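To make "commit in advance" concrete, here is a minimal sketch of what a pre‑committed pause rule might look like if written as code. The metric names and thresholds are hypothetical illustrations; what matters is that the rollback condition is written down before deployment, not improvised after harms appear.

```python
# A minimal sketch of a pre-committed pause rule for a staged rollout.
# Metric names and thresholds are hypothetical illustrations; real values
# would be negotiated with auditors and affected communities in advance.

PAUSE_THRESHOLDS = {
    "max_error_rate": 0.05,          # ceiling on overall triage error rate
    "max_group_disparity": 0.02,     # ceiling on error-rate gap between groups
    "max_unresolved_appeals": 100,   # ceiling on appeals past the agreed window
}

def should_pause(metrics: dict) -> list[str]:
    """Return the list of threshold breaches; any breach triggers a pause."""
    breaches = []
    if metrics["error_rate"] > PAUSE_THRESHOLDS["max_error_rate"]:
        breaches.append("error rate above agreed ceiling")
    rates = metrics["group_error_rates"].values()
    if max(rates) - min(rates) > PAUSE_THRESHOLDS["max_group_disparity"]:
        breaches.append("error-rate disparity between groups above agreed ceiling")
    if metrics["unresolved_appeals"] > PAUSE_THRESHOLDS["max_unresolved_appeals"]:
        breaches.append("appeal backlog above agreed ceiling")
    return breaches

# Example monitoring snapshot (invented numbers):
snapshot = {
    "error_rate": 0.03,
    "group_error_rates": {"group_a": 0.02, "group_b": 0.06},
    "unresolved_appeals": 40,
}

if breaches := should_pause(snapshot):
    print("PAUSE deployment:", "; ".join(breaches))
else:
    print("Continue pilot; keep monitoring.")
```

Notice that in this invented snapshot the overall error rate looks healthy, yet the gap between groups trips the rule. That is exactly the kind of harm a pre‑committed threshold catches and a reassuring dashboard average hides.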

Under high stakes, epistemological skepticism leans toward a "safe to proceed" standard rather than an unmeetable demand to "prove it harmless." And "safe" is not a feeling; it is a standard: evidence plus structure that make self‑correction likely.

Acting without guarantees

Both stories share a theme: you have to act without guarantees.

The tools in this book won't give you certainty. What they give you is a way to move through uncertainty with:

  • Clearer awareness of what you're assuming.

  • A more honest sense of how strong your evidence is.

  • A habit of matching your confidence and effort to stakes.

  • A commitment to building reversibility and feedback where you can.

There will still be times when you get it wrong.

  • You'll take a job that doesn't fit.

  • You'll advise caution where a bolder move would have been better, or vice versa.

  • You'll trust someone who lets you down, or hold back trust that could have helped.

The point is not to eliminate regret.

The point is to be able to say, afterward: "Given what I knew then, given the tools I had, and given the stakes, I acted as responsibly as I could." And then: "Now that I know more, how do I update?"

Two different kinds of error

It can help to distinguish two kinds of mistakes:

  1. Type A: acting too soon, on too little.

    You move quickly with high confidence, on thin evidence, in a high‑stakes setting.

  2. Type B: waiting too long, demanding too much.

    You refuse to move until you have near‑certainty, even when delay itself causes harm.

Epistemological skepticism, practiced well, tries to:

  • Reduce Type A errors in high‑stakes, low‑reversibility contexts.

  • Reduce Type B errors in moderate‑stakes, moderate‑reversibility contexts.

You're aiming for a posture that asks, almost automatically:

  • "Is this a domain where the harm of acting too soon is greater than the harm of waiting?"

  • "Or is this a domain where the harm of waiting is greater than the harm of trying and adjusting?"

In the job‑offer story, waiting a bit to gather more information has modest cost. In the AI‑deployment story, moving too fast with thin evidence has potentially large, hard‑to‑reverse costs. Your stance should reflect that.

A small practice: mapping stakes and reversibility

This week, take one decision you're facing—small or large—and sketch it on a simple two‑axis grid in your notebook:

  • Horizontal axis: stakes (low to high).

  • Vertical axis: reversibility (easy to hard to undo).

Place your decision roughly where it belongs.

Then ask:

  • "Given this position, how much evidence do I really need?"

  • "If this is high‑stakes and hard to reverse, have I been too casual?"

  • "If this is low‑stakes and easy to reverse, am I over‑delaying?"

Finally, write one sentence:

  • "Given what I know now, and given where this sits on the grid, here is what I will do, and here is what I will watch for that might make me change course."

You are not trying to engineer the perfect decision.

You are practicing the art of acting under uncertainty with your eyes open—which, in a world like this, is about as close as we get to wisdom.

