Chapter 8: Falsifiability and Failure Modes
- Paul Falconer & ESA

A short story about a long argument
A few years ago, I found myself in a conversation that went nowhere.
The person I was talking to believed something I thought was clearly false. I had evidence. I presented it patiently. They listened, nodded, and then said: "That's interesting, but it doesn't change anything. You just don't understand the deeper picture."
I tried a different angle. Same result.
I tried another. Same.
After an hour, I realised: there was nothing I could have said that would have made a difference. Not because my evidence was weak, but because their belief was structured in a way that no evidence could touch. Every counter I offered was absorbed, reinterpreted, or dismissed as missing the point.
I walked away frustrated. But later, I realised the frustration was useful. It taught me something I hadn't yet named: the difference between a belief that can be tested and one that can't.
That difference is what this chapter is about.
By now you have a starting stance and a first guardrail. You begin from "not yet persuaded" when a new claim shows up. You know that the person making the claim carries the burden of proof, scaled to how strong, unusual, and high‑stakes their assertion is. That already puts you ahead of most of what circulates in the information environment you move through.
There is one more foundational tool we need before we can talk about calibrating confidence and proportional scrutiny.
We need to ask: How could this be wrong?
That is the heart of falsifiability.
What falsifiability actually means
Philosophers and scientists argue about the fine print of falsifiability, but for our purposes we can keep it simple.
A claim is falsifiable if you can say, in plain language:
"If X, Y, or Z happened, I would take that as good reason to lower my confidence in this claim—or even give it up."
Falsifiability is not a guarantee that the claim is false. It is a willingness to name in advance what would count against it.
Compare:
"This medicine reduces headaches by at least 30% in most people who take it."
"This medicine works in mysterious ways that cannot be captured by numbers."
The first statement is falsifiable. If, in careful trials, people who take it do not have fewer headaches than those who don't, you have a clear reason to doubt the claim.
The second statement has insulated itself. Nothing you observe—no matter how many patients see no benefit—has been allowed to count against it. It has become, in practice, unfalsifiable.
Epistemological skepticism treats unfalsifiable claims with caution. Not because they are necessarily wrong, but because they are untethered. If nothing could ever count against a belief, it is very hard to keep it honest.
Everyday examples: "what would change your mind?"
You don't need a lab to use this tool.
You can bring falsifiability into ordinary life with a single question:
"What, if it happened, would change your mind about this?"
Take a few examples.
A friend says, "This tech CEO is a genius and always knows what he's doing."
You might ask, gently:
"Always? If, say, several major products failed in a row, or if we found internal emails showing he'd ignored serious safety concerns, would that change your view?"
If the answer is, "Well, no, I'd still think he's a genius," then what is being claimed is not just about performance; it has drifted into identity or faith.
Or:
Someone says, "This political movement is fundamentally corrupt."
You could ask:
"What would count as evidence against that? If they passed certain policies, or handled a scandal transparently, would that move you at all?"
You are not trying to trap people. You are trying to see whether the claim is hooked into the world in a way that allows reality to push back.
You can—and should—ask the same of yourself:
"What, if it happened, would make me question my view about this technology, this institution, this relationship, this part of my identity?"
If your honest answer is "nothing," that belief is probably doing some job other than tracking reality.
Common failure modes: how beliefs dodge falsification
Once you start looking for them, you will see certain patterns over and over again—ways in which people (including you) protect beliefs from ever being tested.
Here are four big ones.
1. Moving the goalposts
At first, you say, "If X happens, I'll reconsider."
Then X happens, and you say, "Well, actually, that doesn't count."
Example:
"If this AI model generates dangerous outputs even once, we should slow down."
It does. The response: "That's just an edge case; it doesn't really count."
Sometimes edge cases do matter less. The problem is when the threshold keeps sliding away every time the belief is threatened.
A healthier move is to set thresholds in advance—especially when stakes are high—and to treat crossing them as a real signal, even if the result is uncomfortable.
2. Changing the claim midstream
You start with a bold, testable claim. When evidence goes against it, you quietly replace it with a weaker, vaguer one and pretend you believed the weaker version all along.
Example:
Original: "This diet will transform your health in 30 days."
After no change: "Well, I meant more like a mindset shift, not literal health metrics."
This is a retreat from a falsifiable claim into an unfalsifiable one.
There is nothing wrong with updating your view. The problem is when you refuse to admit that the original claim has been challenged or falsified, and instead rewrite history so your belief never really risked anything.
3. Immunising the belief
You build a story in which any apparent counter‑evidence is automatically reinterpreted as support.
Example:
"The fact that the media criticises our movement just shows how right we are—they're scared."
"If scientists disagree with this theory, it proves they're part of the conspiracy."
In these frames, there is no possible observation that would count against the belief. Everything is pre‑classified as confirmation.
Once again, the issue is not that conspiracies or media bias never exist. The issue is the structure: the belief has been made self‑sealing.
4. Shifting from world‑claims to identity‑claims
A challenge to a belief about the world ("this policy reduces harm") is answered as if it were a challenge to a person's core identity ("you're saying I'm a bad person").
When that happens, the conversation often shuts down. To change your mind would feel like self‑betrayal, so falsification is never really allowed to happen.
All of these failure modes are understandable. They protect grooves, relationships, and identities. But if left unchecked, they make learning from reality almost impossible.
Where falsifiability has limits
It's important to be honest about where this tool does not straightforwardly apply.
Not every claim in your life can or should be treated like a scientific hypothesis.
There are at least three areas where falsifiability has to be handled with care:
Values and commitments. "Human beings have inherent dignity" is not the kind of claim you test by running experiments to see whether treating people badly has bad consequences. It's a moral stance. You can examine its coherence and its implications, but you don't "falsify" it in a simple way.
First‑person experience. "I am in pain" or "this music moves me" are not easily falsified from the outside. You can doubt your interpretations ("Is this pain physical or emotional?"), but the raw "what it is like" carries a different kind of weight.
Deep background assumptions. Every system of thought rests on some starting points: the rules of logic, basic trust in perception, the existence of a world outside your mind. You can question these in philosophy seminars. In daily life, you mostly have to treat them as working assumptions.
Epistemological skepticism is not about flattening everything into lab‑style falsifiability. It is about using falsifiability where it fits—especially for empirical claims about how the world works—while being explicit about which beliefs you are treating as values, starting points, or unresolved mysteries.
Falsifiability as an act of trust
There is a paradox here.
To make a claim falsifiable is, in a way, to make it vulnerable. You are saying: "If reality pushes back in these ways, I will let go." That can feel risky, especially if a belief has been important to you.
But in another sense, falsifiability is an act of trust:
Trust that reality is there, pushing back.
Trust that you will not collapse if you revise your map.
Trust that you can care about something and still look honestly at whether your belief about it is accurate.
This is where the tools from earlier chapters come back in.
The Null Hypothesis gives you a neutral starting point. The Burden of Proof tells you who needs to provide evidence. Falsifiability adds: "And here is how I will listen when the evidence says 'no'."
A small practice: writing "how this could be wrong"
Here is a practice you can try this week.
Pick one belief that matters to you. Not your most sacred value, and not something trivial. Something in the middle:
A belief about a technology ("This kind of system will mostly help/harm").
A belief about an institution ("This organisation is broadly trustworthy/untrustworthy").
A belief about yourself ("I'm bad at X", "People always Y with me").
Write it down as clearly as you can.
Then, in a few short bullet points, answer:
What observations would count against this belief?
Try to name at least two possibilities. Be concrete.
How likely is it that you would actually notice those observations if they occurred?
Would you hear about them? Would you dismiss them?
If you did notice them, what would you do?
Would you lower your confidence? Seek more data? Talk to someone you trust?
You are not committing to abandon the belief at the first sign of trouble. You are simply making a space in which reality is allowed to speak.
Over time, this habit—"how could this be wrong, and would I listen?"—will make your beliefs more responsive to the world they are meant to track. It will also make your eventual confidence, when you have it, more earned.