Bridge Essay 2 - From Beliefs to Networks: When Thinking Becomes Systemic Risk

  • Writer: Paul Falconer & ESA

In the first bridge essay, we talked about how poor thinking habits—lazy reasoning, double standards, “just asking questions”—can become ruts in the mind. A single path, walked often enough, becomes the default route.

But here’s the thing: paths don’t stay alone. They connect. They form networks. And when enough ruts link up, it can feel like you have not just a bad habit here and there, but a whole landscape shaped by entrenched thinking.

That’s what this essay is about. How individual habits of thought cluster together into belief networks. How those networks can become self‑reinforcing. And how we might begin to recognise—and perhaps measure—the systemic risk that emerges when thinking becomes a closed system.

When Paths Connect

Imagine a village. At first, there are a few trails: one to the well, one to the fields, one to the neighbour’s house. Each is worn by use. Over time, people start joining them—cutting shortcuts, linking paths. What began as separate trails becomes a web. Now you can get from the well to the fields without ever leaving a beaten track.

Beliefs work the same way. In the brain, the principle is simple: neurons that fire together wire together. If you develop the habit of lazy thinking about politics, that habit doesn’t stay quarantined. It leaks. The next time you think about science, or health, or money, you’re more likely, though not guaranteed, to reach for the same mental shortcut. If you’ve trained yourself to feel a sense of superiority from “knowing the truth” about one conspiracy, that feeling attaches to other beliefs. The ruts link up.

In the technical papers, we call this cognitive synergy—the way that different fallacies reinforce each other, creating a network that is stronger than any single belief.

Scaffolding: How Beliefs Prop Each Other Up

Some beliefs are foundational. They act like the main beams of a house (or the thickest threads in the tangle we’ll get to later). Others are like the walls—built on top of the foundation, leaning on it for support.

If you hold a deep, identity‑level belief—say, that “institutions cannot be trusted”—that one belief can scaffold many others. Distrust in medicine, distrust in media, distrust in science, distrust in government… each becomes a logical extension of the first. The foundation belief doesn’t have to be true; it just has to feel true. And because it’s the foundation, it’s rarely questioned. To question it would be to risk the whole structure.

This is ideological scaffolding. A single, deeply entrenched belief can become the anchor for a whole cluster of secondary beliefs. And because they’re all tied together, evidence against one feels like evidence against the whole structure. That’s why people sometimes defend a minor belief as if their life depended on it—to them, it might feel that way.
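One way to picture scaffolding is as a dependency graph: secondary beliefs point back to the foundation they lean on, so challenging the foundation implicitly challenges everything built on it. The sketch below is purely illustrative; the belief names and the `depends_on` structure are not drawn from the formal model.

```python
# Hypothetical sketch of ideological scaffolding as a dependency graph.
# Belief names and structure are illustrative, not from the formal model.

depends_on = {
    "distrust_medicine":   "institutions_untrustworthy",
    "distrust_media":      "institutions_untrustworthy",
    "distrust_science":    "institutions_untrustworthy",
    "distrust_government": "institutions_untrustworthy",
}

def beliefs_at_risk(foundation: str) -> list[str]:
    """Every secondary belief that leans on the given foundation."""
    return [b for b, f in depends_on.items() if f == foundation]

# Evidence against the foundation puts all four secondary beliefs at risk,
# which is why people defend it as if the whole structure depended on it.
print(len(beliefs_at_risk("institutions_untrustworthy")))  # 4
```

The point of the sketch: the foundation node has the highest "fan-out", so it is the belief least likely to be questioned and most costly to revise.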

Spillover: When Bad Thinking Crosses Borders

Sometimes beliefs spread not because they’re logically connected, but because the habit of thought has become generalised. Spillover isn’t always harmful—learning to question one thing can help you question others. The concern here is a particular kind of spillover: when dismissal and suspicion become the default for any evidence that challenges the network.

You learn to distrust one source, and soon you distrust all sources. You get used to dismissing evidence in one domain, and soon you dismiss it everywhere.

In the formal model, this is called spillover effect. It’s why someone who rejects climate science might also reject vaccine science, even though the topics have nothing to do with each other. The way of thinking—dismissal, suspicion, shortcut—has become the default. And it often starts with that foundational distrust of institutions we described earlier: once you’ve learned to dismiss one institution, dismissing the next feels like consistency.

What Does a Belief Network Look Like?

Imagine a tangle of threads, each one a belief. Some threads are thick—they’ve been walked many times and are central to the network. Others are thinner, dependent on the thicker ones for support. Pull one thick thread, and the whole tangle moves.

That tangle is what we call a belief network. In the technical papers, we try to give it a number: the Composite NPF Index (CNI). It’s a proposed way of summarising how entrenched the whole network has become—not just one belief, but the system they form together.

A low CNI (say, 0.2) would mean the threads are loose, flexible, easy to rearrange. A high CNI (say, 0.8) would mean they’re knotted tight, resistant to being untangled. The exact numbers are still a hypothesis, but the idea is simple: some networks are healthy, open to new evidence; others are closed and self‑sealing—what the formal model would describe as high‑CNI networks. (For the proposed thresholds and their neurocognitive correlates, see Paper 2, Section 9.)
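To make the thread metaphor concrete: a belief network can be represented as a weighted graph, where each edge weight stands for how strongly two beliefs reinforce each other (a "thicker thread"). The toy score below simply averages those weights; the actual CNI calculation is defined in Paper 2, and the edges and weights here are invented for illustration.

```python
# A toy illustration only: the real CNI calculation is defined in Paper 2.
# Each edge weight in [0, 1] stands for how strongly two beliefs reinforce
# each other; the naive score below just averages them.

edges = {
    ("institutions_untrustworthy", "distrust_medicine"): 0.9,
    ("institutions_untrustworthy", "distrust_media"):    0.8,
    ("distrust_medicine", "distrust_science"):           0.7,
}

def naive_tightness(edges: dict) -> float:
    """Mean reinforcement weight across the network (NOT the formal CNI)."""
    return sum(edges.values()) / len(edges)

print(round(naive_tightness(edges), 2))  # 0.8 -> "knotted tight" in the essay's terms
```

Even this crude average captures the intuition: the number describes the system as a whole, not any single belief in it.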

Why Context Matters: Culture and Calibration

Here’s a complication: what counts as a “tight” network depends partly on where you’re standing. In some cultures, questioning authority is seen as a virtue; in others, harmony is valued above all. The same network might be judged differently depending on the norms around it.

The formal framework proposes a cultural calibration parameter—a way of adjusting how we interpret a network’s tightness based on the cultural context. It is a theoretical proposal, not a validated tool. (The technical appendix offers a simple decision tree for choosing a calibration parameter, but it is explicitly labelled as provisional.) It’s a recognition that “systemic risk” is not a universal label; it has to be read against the background of what is considered normal, healthy, or acceptable in a given setting.

For now, the important thing is to notice: a belief network that looks dangerously closed in one culture might look perfectly ordinary in another. The goal is not to pathologise difference, but to understand when a network becomes genuinely resistant to evidence in ways that harm individual and collective flourishing.

So What?

You might be thinking: this is interesting, but what does it mean for me?

It means that if you want to change your thinking, you can’t always do it belief by belief. Sometimes you have to look at the network. Sometimes the most entrenched belief isn’t the one you argue about most; it’s the foundation that all the others rest on.

If you notice that your distrust of one institution has become a blanket distrust of all institutions, that’s a sign of spillover. If you feel a sense of superiority attached to a whole cluster of beliefs, that’s a sign of scaffolding (the foundation feeling identity‑level). If new evidence never seems to make a dent—because it would threaten the whole structure—that’s a sign of cognitive synergy: the network has become self‑sealing, or is at least behaving that way for now.

The good news is that networks can be untangled. It takes time, and it often takes help—someone outside the network who can point out the scaffolding you’ve stopped seeing. But it’s possible. The same plasticity that lets ruts form also lets new paths be carved.

Go Deeper

This essay introduces the concept of belief networks and the Composite NPF Index (CNI). For the formal model, including how CNI is calculated, the proposed thresholds, and the research behind it, see:

  • Paper 2: The Composite NPF Index – Belief Networks and Systemic Risk

  • Read on SE Press

  • Download from OSF (DOI: 10.17605/OSF.IO/C6AD7)

    Key sections:

    • Section 2 – Why Beliefs Cluster (cognitive synergy, scaffolding, spillover)

    • Section 9 – Thresholds & Neurocognitive Correlates (proposed CNI ranges)

    • Appendix B – Cultural Calibration Decision Tree

Like all papers in this series, Paper 2 is a formal hypothesis: simulation‑supported, not yet field‑validated. The CNI is a proposed measure, not a settled diagnostic tool. If you’re reading Paper 2, you’re stepping into the hypothesis layer of the work; feedback, critique, and adversarial tests are welcome.

The next bridge essay will explore how these networks spread between people and between humans and AI—and how we might build cognitive immunity.

End of Bridge Essay 2


