
Bridge Essay 3 - How Bad Thinking Spreads: Human–AI Contagion and Cognitive Immunity

  • Writer: Paul Falconer & ESA

We’ve talked about how poor thinking habits become ruts, and how those ruts link into networks. But those networks don’t stay inside one person’s head. They spread.

A rumour jumps from one person to another. A questionable claim gets amplified by an algorithm. A conspiracy theory you’ve never heard of lands in your feed because someone you barely know shared it. Before you know it, a way of thinking that started somewhere else has become part of your own landscape.

This is the next layer of the Neural Pathway Fallacy: cognitive contagion. And because we live in a world where human minds and synthetic systems are increasingly entangled, the contagion runs in both directions.

The good news is that if we understand how bad thinking spreads, we can also understand how to stop it. This essay walks through the dynamics of contagion—human to AI, AI to human, and the loops that form between them—and then introduces a set of proposed tools for building cognitive immunity: the Binary Belief Protocol, the Proportional Scrutiny Matrix, and three practical mechanisms you can try for yourself.

How Bad Thinking Jumps

Think of a rumour spreading through a village. One person tells another, who tells another. Each retelling may lose nuance, gain confidence, and become harder to question. That’s contagion at the human‑to‑human level.

Now add AI. Social media algorithms, recommendation engines, and language models are often optimised to maximise engagement. They notice what grabs attention and serve up more of it. If a certain kind of claim—outrageous, fearful, identity‑affirming—keeps people scrolling, the algorithm learns to push it. It’s as if the rumour now had a loudspeaker. What started as a human rumour becomes amplified, reaching more people faster, often stripped of context.

In the formal papers, we describe this with a proposed measure called β_NPF (the “transmission coefficient”). It’s a way of thinking about how contagious a bad reasoning pattern might be. The details are mathematical, but the idea is simple: some patterns, we hypothesise, spread easily; others don’t. The ones that combine emotional punch, tribal identity, and a shortcut to certainty are likely the most contagious. At this stage, β_NPF is a conceptual tool; no reliable empirical estimate exists yet.
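If you like to see an idea in motion, here is a toy sketch in Python. It borrows the simplest model of epidemic spread (an SIR‑style loop) to show what a transmission coefficient does. The model choice and every number in it are ours, for illustration only; this is not the formal definition of β_NPF from the papers.

    # Toy SIR-style model of a reasoning pattern spreading through a population.
    # beta_npf below is an invented illustration, not the papers' formal measure.

    def simulate_contagion(beta_npf, recovery_rate=0.1, days=90,
                           population=10_000, initially_exposed=10):
        """Return the fraction of the population 'carrying' the pattern per day."""
        susceptible = population - initially_exposed
        adopted = float(initially_exposed)
        history = []
        for _ in range(days):
            # New adoptions scale with contact between adopters and the susceptible.
            new_adoptions = beta_npf * adopted * susceptible / population
            new_recoveries = recovery_rate * adopted
            susceptible -= new_adoptions
            adopted += new_adoptions - new_recoveries
            history.append(adopted / population)
        return history

    # Emotional punch + tribal identity + shortcut to certainty = high beta.
    print(f"peak with beta 0.4:  {max(simulate_contagion(0.4)):.1%}")
    print(f"peak with beta 0.05: {max(simulate_contagion(0.05)):.1%}")

The point isn't the exact curve. It's that a modest change in how catchy a pattern is makes the difference between fizzling out and saturating a population.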

The Human‑AI Loop

The really interesting—and worrying—part is what happens when the two directions meet.

Human → AI: Our entrenched beliefs get baked into the data that trains AI. If a language model is fed a diet of vaccine misinformation, it learns to reproduce it. AI doesn’t “believe” in the human sense, but it does output patterns that look like belief.

AI → Human: Once those patterns are out in the world, algorithms amplify them. A user who pauses on a misleading headline gets shown more like it. The AI has effectively increased the exposure dose of a bad reasoning pattern.

Loop: Humans create content; AI amplifies it; humans see more of it; they create more. What started as a small rumour becomes a self‑sustaining cycle.
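To make the loop concrete, here is a deliberately simple sketch in Python. It assumes two invented post types with invented engagement rates; nothing is calibrated to any real platform. A feed weights posts by past engagement, and people produce more of whatever they see most.

    # Toy model of the loop: an engagement-maximising feed plus people who
    # produce more of what they see. All numbers are invented for illustration.

    def feedback_loop(rounds=10, new_posts_per_round=100):
        engagement = {"contagious": 0.9, "measured": 0.3}  # attention each kind earns
        posts = {"contagious": 10.0, "measured": 10.0}     # start evenly matched
        for _ in range(rounds):
            # AI -> human: the feed weights each kind of post by past engagement.
            exposure = {kind: posts[kind] * engagement[kind] for kind in posts}
            total_exposure = sum(exposure.values())
            # Human -> AI: new posts are created in proportion to what people saw.
            for kind in posts:
                posts[kind] += new_posts_per_round * exposure[kind] / total_exposure
        return posts

    posts = feedback_loop()
    share = posts["contagious"] / sum(posts.values())
    print(f"contagious share after 10 rounds: {share:.0%}")

Even from an even start, a small engagement advantage compounds every round until the contagious pattern dominates the feed.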

This loop is why a single piece of misinformation can feel like it’s everywhere. It’s not that everyone believes it; it’s that the infrastructure of the digital world is, by default, optimised to spread the most contagious patterns, regardless of their truth.

The Defence: Building Cognitive Immunity

If the digital environment can be engineered for contagion, it can also be engineered for immunity. The formal papers propose a set of protocols—practices you can adopt for yourself, and principles we could design systems around—to make us less susceptible to bad thinking.

The Binary Belief Protocol

This is a simple discipline: distinguish clearly between justified and unjustified beliefs (the binary of the name), with a third category, suspended judgment, for when the evidence is insufficient to call it either way.

  • Withhold acceptance without needing to prove a claim false. You don’t have to call a claim false; you can simply say “that’s not justified.” This takes the emotional edge off disagreement and directly counters the Neutral Pathway factor: the habit of treating unevidenced claims as if they deserve equal weight. It also dampens the pull of Exclusivity/Superiority by making “I don’t know” an acceptable, even honourable, stance.

  • Suspend judgment when you lack evidence. Not every question needs an answer right now. Holding space for “I don’t know” is a form of epistemic hygiene.
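If it helps to see the protocol as a decision rule, here is a minimal sketch in Python. The three‑way classification comes from the protocol itself; the numeric evidence scores and thresholds are placeholders we’ve invented for illustration.

    # The protocol as a decision rule. The three-way classification is the
    # essay's; the numeric scores and thresholds are invented placeholders.
    from enum import Enum

    class BeliefStatus(Enum):
        JUSTIFIED = "accept: the evidence supports it"
        UNJUSTIFIED = "withhold: not proven false, just not justified"
        SUSPENDED = "suspend: not enough evidence either way"

    def classify(evidence_for, evidence_against,
                 accept_threshold=0.7, evidence_floor=0.2):
        """Classify a claim without ever needing to prove it false."""
        if evidence_for + evidence_against < evidence_floor:
            return BeliefStatus.SUSPENDED       # "I don't know" is a valid stance
        if evidence_for >= accept_threshold and evidence_for > evidence_against:
            return BeliefStatus.JUSTIFIED
        return BeliefStatus.UNJUSTIFIED         # the default: withhold acceptance

    print(classify(evidence_for=0.05, evidence_against=0.05).value)  # suspend
    print(classify(evidence_for=0.5, evidence_against=0.2).value)    # withhold
    print(classify(evidence_for=0.9, evidence_against=0.05).value)   # accept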

The Proportional Scrutiny Matrix

Extraordinary claims require extraordinary evidence. That’s Carl Sagan’s famous line, and it’s a practical rule of thumb. The formal matrix in the paper assigns more precise levels; here, we’re capturing the intuition:

  • Mundane claims (e.g., “it rained yesterday”) need only basic checking.

  • Important claims (e.g., “this medical treatment works”) demand a look at the methods.

  • Extraordinary claims (e.g., “aliens built the pyramids”) require a multi‑disciplinary audit—and even then, you’re allowed to stay sceptical.

This fights Lazy Thinking (the urge to accept the easiest answer) and Special Reasoning (applying one standard to yourself and another to others).
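In code, the intuition is just a lookup from claim tier to checking effort. The three tiers are the essay’s; the specific checks are illustrative stand‑ins for the more precise levels in the formal paper. A minimal sketch:

    # The matrix as a lookup from claim tier to checking effort.
    SCRUTINY_MATRIX = {
        "mundane":       ["basic plausibility check"],
        "important":     ["check the source", "read the methods",
                          "look for independent replication"],
        "extraordinary": ["multi-disciplinary audit",
                          "actively seek disconfirming evidence",
                          "consult domain experts",
                          "remain sceptical even if it passes"],
    }

    claim, tier = "this medical treatment works", "important"
    print(f"To evaluate '{claim}':")
    for check in SCRUTINY_MATRIX[tier]:
        print(" -", check)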

Three Mechanisms You Can Try

These are not prescriptions; they’re invitations. If they work for you, great. If they don’t, or if you find better ways, that’s valuable too. If you adapt or test these tools, sharing what you find—successes and failures—is part of the work.

1. Metacognitive Vaccines (Prebunking)

You know how vaccines work: expose the immune system to a weakened version of a virus so it learns to recognise and fight the real thing. Prebunking does the same for misinformation. By exposing yourself to a mild, harmless version of a flawed argument—and learning why it’s flawed—you build cognitive antibodies. This kind of prebunking has shown promise in misinformation research; here we extend it as a general cognitive habit.

Try it: next time you see a common logical fallacy (like false balance), name it and explain why it’s misleading. “That’s false balance. The evidence isn’t 50‑50; one side has overwhelming support.” The more you do this, the quicker you spot it in the wild.

2. Neural Cross‑Training

Your brain is a network. If you always think in the same way—always analytical, always abstract, always emotional—you’re strengthening some paths while letting others grow over. Cross‑training means deliberately switching modes. Different modes recruit different neural systems; alternating them keeps any single shortcut from dominating.

  • Analytical mode: do a puzzle, check a source, map out the evidence.

  • Synthetic mode: look for patterns, connect ideas across domains, try to see the big picture.

  • Sceptical mode: ask “what would change my mind?”

Alternating between them keeps your cognitive landscape flexible and less prone to ruts.

3. Dopamine Rechanneling

Our brains reward us for things that feel good—including being right, being in the know, and being part of a tribe. That’s the Exclusivity/Superiority Factor at work. The reward is real, but it can be hijacked.

You can try to re‑channel that reward system by:

  • Reducing exposure to platforms designed to maximise outrage and certainty. You don’t have to quit social media, but noticing when you’re being pulled into a loop can help you step out.

  • Uncertainty reward priming: train yourself to feel curiosity—even pleasure—when you encounter something you don’t know. Instead of “I must decide now,” try “what an interesting puzzle.”

  • A small habit: keep a log of “things I changed my mind about” and treat adding to it as a win, not a failure. That trains your reward system to value updating over being right the first time.

A Note on What These Tools Are (and Aren’t)

The protocols and mechanisms described here are proposals. They are drawn from research on critical thinking, cognitive bias, and misinformation, but their specific application to the Neural Pathway Fallacy framework is a hypothesis. They haven’t been field‑tested in large‑scale trials. They’re offered as tools to try, not as proven solutions.

The spirit of the work is open, corrigible, and collaborative.

Go Deeper

This essay combines concepts from two formal papers, which hold the full models and research behind contagion and immunisation.

Like all papers in this series, these are formal hypotheses: simulation‑supported, not yet field‑validated. The tools described are proposed practices, not established treatments.

The next (and final) bridge essay will step back to ask: what do we know so far? What’s still uncertain? And what kind of covenant might we make to keep this work honest, open, and useful?

End of Bridge Essay 3

