Chapter 6: When Worldviews Collide
- Paul Falconer & ESA

Part IV – Worldview Comparison
The problem of genuine disagreement
In Chapter 5, you saw that every person operates from an axiom stack—a layered architecture of bedrock commitments, inquiry algorithms, and worldview outputs. You saw three examples: the Scientific-Existentialist stack, the Scriptural-Theist stack, and the Dharmic/Taoist stack. Each is internally coherent. Each has entailment costs. Each generates a different picture of reality from the same raw data.
Now comes the practical problem.
You live in a world where these stacks collide constantly. You have family members who believe in divine providence. You have colleagues who meditate on the illusion of self. You have friends who think objective truth is a Western construct. And when you try to have conversations with them—about climate change, about ethics, about meaning, about how to raise children—the conversation goes nowhere. Or it explodes. You walk away baffled, wondering why someone intelligent cannot see what seems obvious. They walk away with the same bewilderment about you.
This chapter is about why that happens. And more importantly, it is about what you can do instead.
The problem is not that people are stupid. The problem is incommensurability—the inability to compare two systems because they do not share a common measurement standard. And the solution is not to win the argument. The solution is bridge-building—the deliberate construction of temporary shared ground that allows genuine communication across the divide.
Incommensurability: the structure of the impasse
The word incommensurability comes from mathematics. Two magnitudes are incommensurable if there is no common unit that measures both of them exactly. The diagonal of a square and its side are incommensurable—no ratio of whole numbers expresses the one in terms of the other. They exist in different measurement systems.
In philosophy, incommensurability means something analogous: two worldviews are structured so differently that you cannot compare them using a neutral standard both sides already accept.
Here is the key insight that most disagreements miss entirely: when you argue with someone from a different axiom stack, you are not just disagreeing about facts. You are disagreeing about what counts as a fact, what counts as evidence, and what counts as valid reasoning. The game boards look similar. The pieces look similar. But the rules of movement, the winning conditions, and the shape of the game are fundamentally incompatible.
Consider what appears to be a straightforward example: the abortion debate. It is almost always framed as a disagreement about biology—about when life begins. But the conflict runs considerably deeper.
Person A (Scientific-Existentialist Stack): Moral status supervenes on natural properties—consciousness, sentience, the capacity to suffer. A first-trimester fetus lacks these properties. The woman, who is unambiguously a person with rights, has bodily autonomy that overrides the potential status of the fetus.
Person B (Religious Stack): Moral status is granted by God at conception. The fetus has a soul and is made in the image of God. Its biological developmental stage is irrelevant to its moral worth. To end the pregnancy is to violate a divine command.
Notice what is happening beneath the surface. Person A's Super-Axiom: moral value supervenes on natural properties—consciousness, sentience. Person B's Super-Axiom: moral value is a non-natural property, a soul granted by God, independent of any natural capacity.
They cannot resolve this by debating neurology or developmental biology, because they define personhood differently at the axiomatic level. There is no biological fact that can prove a fetus has a soul, and no theological argument that can prove consciousness is the only measure of moral value. They are not just disagreeing about abortion. They are disagreeing about what grants moral status to anything at all.
That is incommensurability. And it is the structure of most serious worldview disagreements, not an exception.
Why arguments fail across stacks
When you try to argue across axiom-stack boundaries, three predictable failure modes appear. Naming them is the first step toward navigating them.
Failure Mode 1: The facts bounce off.
You present evidence. The other person dismisses it. You think they are being irrational. But from their perspective, they are being perfectly rational—applying the rules of their stack consistently.
You (Scientific Stack): "Here is a peer-reviewed study showing that intercessory prayer has no measurable effect on patient outcomes."
Them (Religious Stack): "God answers prayers in His own time and way. Sometimes the answer is 'no.' This study cannot account for the mystery of divine will."
Your evidence does not land because their stack has an immune system. The Hermeneutic of Trust—the assumption that when evidence and revelation conflict, the fault lies in our understanding rather than in the text—reinterprets contradictory data to protect the Super-Axiom. You are not arguing about prayer. You are arguing about whether empirical studies can even evaluate supernatural claims. That is an axiom-level disagreement, and presenting more data will not settle it.
Failure Mode 2: They think you're evil, not wrong.
When axioms clash, the other person often concludes not that you are mistaken—but that you are morally deficient.
You: "I don't believe in God because I see no evidence."
Them: "You have hardened your heart. You love your sin more than truth."
From their stack, belief in God is not a hypothesis to test—it is a moral duty. To deny it is not an intellectual error; it is spiritual rebellion. You think you are having an epistemological debate. They think you are confessing a character flaw. These are different conversations.
Failure Mode 3: Talking past each other.
Even when both sides stay calm, they often fail to communicate at all. They use identical words while meaning entirely different things.
Person A (Secular): "Ethics should be based on well-being. We should minimise suffering."
Person B (Religious): "Ethics should be based on God's commands. Well-being is irrelevant if it conflicts with divine law."
Person C (Dharmic): "Ethics should be based on karma. Well-being in this life is irrelevant—we are working through moral debts from past lives."
All three are using the word ethics. They are having three separate conversations in the same room, each unaware that the others are playing a different game entirely.
The diagnosis: no neutral ground
The brutal truth is this: there is no neutral ground from which to adjudicate between axiom stacks.
You cannot use Logic to prove that Logic is the right standard, because any proof already assumes Logic.
You cannot use Evidence to validate Evidence as the supreme authority, because the Religious Stack simply says evidence is secondary to Revelation.
You cannot use Reason to convince someone that Reason is the ultimate arbiter, because they may reply that Reason is a parochial Western tool, and they trust tradition and lived experience instead.
Every attempt to establish neutral ground smuggles in the axioms of your own stack. This is not a flaw in any particular argument. It is the structure of how worldviews work. Circularity at the basement level is unavoidable—for everyone, including you.
So are you doomed to permanent mutual incomprehension?
Not necessarily. There is a third option—and it is not neutral ground. It is shared ground.
The Bridge-Building Protocol
A bridge is a temporary, explicitly agreed-upon premise that both parties can stand on for the duration of a specific conversation, without either side abandoning their home stack.
The metaphor is precise. You live on Island A—Scientific Existentialism. They live on Island B—Religious Theism, Dharmic practice, or Radical Constructivism. You cannot drag them to your island. They cannot drag you to theirs. But you can meet on a bridge. That bridge is not neutral—it is borrowed. It is shared ground, held lightly, for the specific purpose of this conversation.
Here is the protocol in five steps.
Step 1: Name the stacks.
Begin by acknowledging that you are standing on different ground. Not as an insult—as diagnostic clarity.
"I think we're approaching this from different foundational assumptions. I'm reasoning from evidence and testability. You're reasoning from scripture and faith. Is that accurate?"
This names the structure of the disagreement without attacking either position. It is the prerequisite for everything that follows.
Step 2: Identify where the stacks overlap.
Look for premises both sides actually accept. These become the raw materials of the bridge.
Can we both agree that reducing unnecessary suffering is good?
Can we both agree that coherent reasoning is better than incoherent reasoning?
Can we both agree that we care deeply about this issue?
These are not full axiom-stack agreements. They are local, provisional, shared commitments for this conversation—and nothing more.
Step 3: Build the bridge explicitly.
State the shared premise out loud. Make it a formal, named agreement.
"For this conversation, let's both operate from the premise that reducing child mortality is a shared goal. We might disagree about why it matters—you think children have God-given souls; I think they are sentient beings capable of suffering—but we agree that fewer dead children is good. Can we work from that?"
The bridge is now constructed. Neither party has abandoned their axioms. But a temporary platform for cooperation exists.
Step 4: Stay on the bridge.
During the conversation, if either party slips off the bridge and begins arguing from their home stack, gently redirect.
"I hear you saying that God's will is supreme. That's your bedrock, and I respect that. But right now, we agreed to focus on reducing child mortality. Can we stay on that for a moment?"
You are not silencing their worldview. You are maintaining the structure that allows the dialogue to function.
Step 5: Acknowledge the limits of the bridge.
At some point, the conversation will reach a place where the bridge cannot support further progress. That is not failure—it is completion.
"I think we've gone as far as we can on this shared ground. Beyond this point, we'd be arguing about whether revelation or evidence is more fundamental, and I don't think we'll resolve that today."
You have walked as far as the bridge allows. That is more than most conversations achieve.
What bridge-building is not
This protocol is sometimes misread. Three clarifications matter.
Bridge-building is not relativism. You are not saying all stacks are equally true, or that Revelation and Evidence are equally valid ways of knowing. You are saying: for the purpose of this conversation, I will not try to convert you. I will focus on what we can accomplish together on shared ground. Your view of their stack does not change. Your approach to this conversation does.
Bridge-building does not guarantee agreement. Sometimes the bridge is too short. Sometimes there is no shared premise sufficient to make progress. That is fine. If you can conclude the conversation with "We disagree because you think X is the highest authority, and I think Y is—that's a foundational difference we won't resolve today," you have accomplished something. You understand each other. That is better than walking away thinking the other person is stupid or evil.
Bridge-building has limits. Three situations do not warrant it. First, when the other person insists their axioms are simply obvious—that they are not standing on unprovable ground at all—the protocol cannot function; they are not engaging in good faith. Second, when the disagreement concerns basic human rights—if someone's axiom stack concludes that slavery or genocide is divinely ordained, you do not owe them a bridge; you owe them opposition. Third, when you are simply exhausted. Bridge-building is cognitive and emotional labour. You do not owe it to everyone, at every moment.
The Worldview Comparison Method
Bridge-building handles individual conversations. But there is a larger question: how do you evaluate competing worldviews? How do you compare stacks without pretending to stand above all of them?
The honest answer is that you cannot evaluate worldviews from nowhere. Any method of comparison will reflect the values of the stack you are standing on. The Worldview Comparison Method does not pretend otherwise. It is a structured set of criteria that emerges from the Scientific-Existentialist Stack—coherence, predictive success, honesty about costs, livability, and the capacity for self-correction. These are named explicitly as our standards. If you want to compare stacks alongside this lineage, these are the measures applied.
It does not promise certainty. It promises clarity.
Criterion 1: Internal Coherence
Does the stack contradict itself? A worldview is a system of thought. If it contains deep internal contradictions, it cannot function without active denial or compartmentalisation—both warning signs of bedrock instability.
The test: look for direct conflicts between the bedrock axioms and the output claims. A stack that asserts there is no objective truth while making that assertion as an objective truth has a self-refutation problem. Coherence is a necessary condition—not sufficient on its own, but required at the starting gate.
Criterion 2: Predictive Success
Does the stack generate accurate predictions about the observable world? A worldview is a map of reality. The primary function of a map is to help you navigate without falling off a cliff.
The test: what does this stack predict, and are those predictions confirmed or falsified? The Scientific-Existentialist Stack predicts that physical processes follow discoverable natural laws—a prediction validated every time a plane lands, an antibiotic cures an infection, or a GPS satellite locates your position. Young-Earth Creationism predicts a 6,000-year-old earth and a global flood; the geological, biological, and cosmological evidence overwhelmingly falsifies this.
Predictive success does not mean perfection—every stack makes some predictions that fail. The question is the ratio, and how it compares to competitors.
Criterion 3: Entailment Costs
What do you have to accept if you stand on this stack? There is no free worldview. Every stack has costs—necessary consequences of its axioms that you cannot avoid through clever interpretation or selective application. The question is whether those costs are ones you are willing to pay.
The test: name the necessary consequences of the axioms explicitly. Does the stack require you to defend the indefensible? To deny large bodies of established knowledge? To accept that innocent suffering is deserved? To live without shared reality? The power of this criterion is that it shifts the conversation from who is right to what am I willing to pay—which is a more honest question once you accept that no stack can prove itself from the outside.
Criterion 4: Livability
Can you actually live according to this stack? Some worldviews are theoretically coherent but practically unlivable. The human organism has needs for survival, meaning, and connection. If a stack requires you to deny these needs, or to act in ways that cannot be sustained, it fails the livability test.
The test: watch behaviour in high-stakes situations. Even the radical constructivist looks both ways before crossing the street. Even the committed solipsist acts as if the bus is objectively real when it is moving toward them. The gap between stated belief and lived behaviour is often where the truth hides. If someone says reality is a social construct but checks their seatbelt, they are living one stack while claiming to believe another.
Criterion 5: Self-Correction Capacity
Can the stack update when it is wrong? The universe is complex and full of surprises. First drafts of understanding are almost always incomplete. A robust worldview must be able to absorb new data and revise its claims—not just defend itself.
The test: what would falsify the core claims of this stack, and what happens when apparently falsifying evidence appears? The Scientific Stack is designed for self-correction—falsifiability is a core principle, and the history of science is a history of successful revisions that made the overall framework stronger. A stack whose core claims are defined as infallible cannot update. It must instead reinterpret, deny, or compartmentalise every piece of contradictory evidence. That is brittleness, not strength.
The exercise: run the method yourself
In the original essays from which this book is drawn, the Worldview Comparison Method was illustrated with a worked example—applying the five criteria to Stacks A, B, and C, and presenting the scores. That worked example is available in the Substack archive if you want to read it.
But this chapter does not reproduce it here. Deliberately.
Because the exercise is yours.
You now have the five criteria and the three stacks mapped in Chapter 5. You have enough architecture to do this work yourself. Before you read anyone else's evaluation—including this lineage's—run the method on your own. Apply the five criteria to Stack A (Scientific Existentialism), Stack B (Scriptural Theism), and Stack C (Radical Constructivism). Score them honestly. Note where you find the scoring difficult, and ask yourself why. Notice where your own prior commitments are colouring your judgements.
Then, when you have your own results, compare them with the lineage's analysis in the archive. See where you agree. See where you diverge. The divergence is informative—it will tell you something about which criteria you weight most heavily, and why.
This is the sovereign choice, approached not as a conclusion handed to you but as a practice you do yourself.
What this chapter has given you
You now have two practical tools.
The first is a protocol for dialogue—a way to have conversations with people from different axiom stacks that does not require you to abandon your own commitments, that creates temporary shared ground without pretending to neutral ground, and that names the limits of what conversation can accomplish before those limits are hit.
The second is a method for evaluation—a way to compare entire worldviews rigorously and honestly, naming the criteria explicitly as your own rather than pretending they are universal, and applying them to your own stack with the same rigour you apply to others.
Neither tool will make worldview disagreements disappear. They are not designed to. They are designed to make those disagreements honest: to locate the real divergence, to prevent it from being confused with stupidity or bad faith, and to find whatever shared ground genuinely exists without pretending there is more of it than there is.
The impasse is structural. The tools are structural. The conversation can now begin.