Sci-Comm Essay 1 - The Investment That Felt Right: How Our Brains Build Belief Networks
- Paul Falconer & ESA

Alex didn’t think of it as a big decision at first.
It started with a video in a finance subreddit: a charismatic presenter explaining why a particular tech stock was “about to explode.” The story was smooth. The charts looked convincing. There was talk of disruption, early adopters, and how “the institutions don’t get it yet.”
Alex had just received a small inheritance. Not enough to retire on, but enough to matter. The idea of turning it into something more—of finally getting ahead—felt exciting. The video felt right.
What Alex didn’t notice was that the brain was already starting to carve a path.
The First Shortcuts
Over the next week, Alex watched more videos from the same creator, then from others who told similar stories. Each one was a little different, but the core message was the same: this asset was special, the old rules didn’t apply, and those who acted boldly would be rewarded.
It was easy to follow. No dense reports, no complicated models—just narratives and charts with arrows pointing up.
This is Lazy Thinking (LT) in the Neural Pathway Fallacy framework: the brain’s preference for the path of least resistance. It’s not stupidity; it’s efficiency. The first explanation that feels satisfying tends to become the default.
When a friend suggested looking at more mundane options—index funds, diversified portfolios—Alex felt bored. “Everyone does that,” Alex thought. “That’s how you stay stuck.”
The brain was learning: this story feels good. Keep walking it.
Special Rules for Me
A week later, over coffee, Alex brought up the investment with a colleague, Dana, who had a background in finance.
“What’s the company’s cash flow like?” Dana asked. “How are they valued compared to peers?”
Alex didn’t know. “The point is, they’re about to disrupt the whole sector,” Alex said. “Traditional metrics don’t really apply.”
Later that night, Alex watched another video from Alex’s favourite creator. When that person mentioned “traditional analysts don’t understand this space,” Alex nodded along. It made sense.
The rut deepened: sceptical of those who disagreed, relaxed with those who confirmed.
“I’m Just Keeping an Open Mind”
On a forum, someone posted a long critique of the investment: concerns about revenue, competition, regulatory risk. Alex read the first paragraph, then scrolled to the comments.
“Classic FUD,” one reply said—fear, uncertainty, doubt. “We don’t know everything yet. You have to keep an open mind.”
Alex liked that comment.
This is the Neutral Pathway (NP) factor. It presents unevidenced or weakly evidenced speculation as if it’s just another reasonable option in the mix. It sounds fair: “just asking questions,” “just being open.” But it quietly blurs the line between claims that are well‑supported and those that aren’t.
Alex told a friend, “I’m not saying this investment is guaranteed. I’m just open to the possibility.” It felt reasonable. It also made it easier to keep walking the same path.
Building a Belief Network
Over time, Alex’s beliefs began to cluster.
“Traditional finance is rigged.”
“Only early adopters really win.”
“If I do what everyone else does, I’ll stay stuck.”
“People who criticise this just don’t get it.”
These weren’t isolated opinions; they supported each other. Doubt about the investment started to feel like doubt about Alex’s ability to see what others couldn’t.
In the NPF/CNI framework, this is how belief networks form. Individual ruts—lazy acceptance here, double standards there, “open‑minded” speculation elsewhere—link up into a web. A change in one belief tugs on others.
The model calls this interplay cognitive synergy and ideological scaffolding: some beliefs become foundational beams, others lean on them for support.
In Alex’s case, one deep belief—“the system is rigged, so I must break the rules to win”—began to hold up the whole structure.
When Spillover Kicks In
The habit of thinking didn’t stay confined to this one stock.
When Alex’s employer offered a matching contribution to a conservative retirement plan, Alex scoffed. “Why would I lock my money in a slow, average fund when there are real opportunities out there?” The same distrust applied.
When a relative suggested paying down debt before investing in volatile assets, Alex felt irritated. “You’re stuck in old ways of thinking,” Alex thought. “You don’t understand this new economy.”
This is Spillover Effect (SE): patterns learned in one domain (distrusting “traditional” advice, valuing contrarian stories) start to colour other decisions. A shortcut becomes a general habit.
From the outside, it might look like Alex was making one risky investment. Inside, a whole network of beliefs was becoming more entrenched.
The Composite NPF Index (CNI), Without the Math
In the technical papers, this tightening network is described with a proposed measure: the Composite NPF Index (CNI). It’s a way of summarising, with a number between 0 and 1, how entrenched a belief network has become.
A low CNI (around 0.2) corresponds to a loose, flexible network. Beliefs can be updated without everything feeling at stake.
A medium CNI (around 0.5) means some defensiveness: new evidence is uncomfortable but can still get through.
A high CNI (around 0.8) describes a self‑sealing network. Evidence that threatens the core feels like an attack—not just on an idea, but on identity.
In the NPF/CNI framework, CNI is a hypothesis: a proposed way to quantify network entrenchment, tested in simulations but not yet validated in field studies. The point here isn’t the exact number. It’s the idea that networks can tighten, and that tightness matters.
For Alex, the network was moving up that scale.
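To make the scale concrete, here is a minimal sketch of how the three bands described above might be expressed in code. The cut-off values are illustrative assumptions; the essay only gives rough anchor points (0.2, 0.5, 0.8), and the real CNI scoring is a research proposal, not this function.

```python
def describe_cni(cni: float) -> str:
    """Map a hypothetical CNI score in [0, 1] to the qualitative bands
    from the essay. The band boundaries here are illustrative guesses,
    not values from the NPF/CNI framework itself."""
    if not 0.0 <= cni <= 1.0:
        raise ValueError("CNI is defined on the interval [0, 1]")
    if cni < 0.35:
        return "loose: beliefs can be updated without everything feeling at stake"
    if cni < 0.65:
        return "defensive: new evidence is uncomfortable but can still get through"
    return "self-sealing: evidence against the core feels like an attack on identity"
```

The point of the sketch is only that entrenchment is treated as a continuum with qualitatively different regimes, not that any particular threshold is correct.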
The Crack Appears
A few months later, the market turned.
At first, the stock drifted down. “Normal volatility,” Alex told friends. “Shaking out weak hands.” The online community said the same. The stories updated, but the core stayed: “This is fine.”
Then a negative earnings report hit. The price dropped harder. Some early promoters quietly moved on to other topics. A long‑time forum member posted a detailed thread explaining why they were selling: cash burn, missed targets, competitive pressure.
Alex felt a jolt. This wasn’t a troll or an outsider; this was someone from inside the tribe.
At a family dinner, Alex’s older cousin—who’d invested conservatively for decades—asked a simple question.
“Can you walk me through why you believe this is a good investment? Not the vibe. Just the reasons and the sources.”
Alex tried. Mid‑sentence, it became clear how much of the case rested on repeated phrases, not concrete numbers.
The ground shifted. The path that had felt so solid started to look less like a road and more like a story.
Enter the Binary Belief Protocol
That night, Alex opened a notebook and tried something new.
On one side of the page, Alex wrote down the main claims:
“This company is fundamentally undervalued.”
“Traditional metrics don’t apply here.”
“Institutional investors are asleep at the wheel.”
“This asset will massively outperform diversified funds.”
On the other side, Alex wrote three columns: Justified, Unjustified, Unknown.
This was a personal version of the Binary Belief Protocol: a discipline of sorting beliefs not into true/false, but into “currently justified,” “currently unjustified,” and “judgment suspended.”
For each claim, Alex tried to find the strongest available evidence:
Not another video, but an audited report.
Not a thread, but a prospectus.
Not a quote tweet, but a primary source.
Some claims had decent support. Many didn’t. A few turned out to be misunderstandings.
By the end of the exercise, Alex had quietly moved several beliefs into the “unjustified” or “unknown” columns.
The belief network loosened, just a little.
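Alex’s notebook exercise can be sketched as a tiny data structure. This is not the framework’s official protocol, just one way to represent the three columns; the claim texts below are Alex’s, while the class and function names are made up for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    """The three columns of the Binary Belief Protocol."""
    JUSTIFIED = "currently justified"
    UNJUSTIFIED = "currently unjustified"
    UNKNOWN = "judgment suspended"


@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)  # strongest available sources
    status: Status = Status.UNKNOWN  # default to suspended judgment


def sort_claims(claims):
    """Group claims by status, mirroring the notebook's three columns."""
    columns = {status: [] for status in Status}
    for claim in claims:
        columns[claim.status].append(claim.text)
    return columns


# Alex's claims, after the evidence hunt (statuses assigned for illustration):
ledger = sort_claims([
    Claim("This company is fundamentally undervalued"),
    Claim("Traditional metrics don't apply here", status=Status.UNJUSTIFIED),
    Claim("Institutional investors are asleep at the wheel",
          status=Status.UNJUSTIFIED),
    Claim("This asset will massively outperform diversified funds"),
])
```

Defaulting every claim to `UNKNOWN` captures the protocol’s key discipline: a claim earns its way into the “justified” column; it does not start there.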
Proportional Scrutiny and Prebunking
Over the next weeks, Alex added another layer: Proportional Scrutiny.
For low‑impact decisions (“try a new café”), quick checks were fine.
For medium‑impact choices (“buy this gadget,” “take this short course”), Alex did a bit more.
For high‑impact claims (“this investment will 10x,” “traditional assets are doomed”), Alex now demanded multiple, independent sources and was willing to say, “I’m not convinced.”
This matched a simple rule: the bigger the claim, the stronger the evidence should be.
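That rule can be written down as a small lookup. The source-count thresholds are invented for illustration; the essay states the principle only qualitatively (bigger claim, stronger evidence).

```python
# Hypothetical evidence thresholds per impact level; the essay gives
# the rule qualitatively, so these numbers are illustrative assumptions.
SCRUTINY_MATRIX = {
    "low": 1,     # try a new cafe: one quick check is fine
    "medium": 2,  # buy a gadget, take a short course: a bit more
    "high": 3,    # "this will 10x": multiple independent sources
}


def convinced(impact: str, independent_sources: int) -> bool:
    """Accept a claim only when the evidence meets the bar for its impact.

    Falling short is a licence to say "I'm not convinced" -- not proof
    that the claim is false.
    """
    return independent_sources >= SCRUTINY_MATRIX[impact]
```

The asymmetry is the point: a high-impact claim backed by one enthusiastic video fails the bar that a low-impact claim would pass.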
At the same time, Alex practised a kind of self‑prebunking:
Noticing phrases like “everyone’s in on it,” “they don’t want you to know,” “this time is different.”
Labelling them, gently, as patterns that often show up in hype.
Naming the pattern made it harder for it to slip in unnoticed.
Cross‑Training and a New Reward
Alex also experimented with neural cross‑training—not in a lab, but in everyday habits.
Some days, Alex read company filings and basic guides to diversification. Slow, analytical work.
Other days, Alex zoomed out to consider broader patterns: personal goals, risk tolerance, time horizons.
Occasionally, Alex deliberately asked, “What would change my mind?” and wrote down thresholds (e.g., “If this misses earnings three quarters in a row, I will reduce my position”).
Different modes of thinking recruit different neural systems. Switching between them made it harder for any single shortcut to dominate.
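Alex’s written thresholds amount to pre-committed update rules. A minimal sketch, assuming a simple quarterly record of whether earnings targets were met; the function name and data shape are made up here, but the trigger (three consecutive misses) is Alex’s own.

```python
def should_reduce_position(earnings_met, window=3):
    """Apply a pre-committed rule: reduce the position if the last
    `window` quarters all missed earnings targets.

    `earnings_met` is a list of booleans, oldest first, one per quarter.
    Writing the rule down in advance makes it harder to rationalise
    away later ("normal volatility", "shaking out weak hands").
    """
    if len(earnings_met) < window:
        return False  # not enough history to trigger the rule yet
    return not any(earnings_met[-window:])
```

The value is not in the code but in the pre-commitment: the threshold is fixed before the evidence arrives, so the decision is made by the earlier, calmer Alex.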
Finally, Alex worked on dopamine rechanneling:
Instead of getting a rush from being “early” or “in on the secret,” Alex began to track something else: moments of updating.
Each time Alex changed a belief in light of better evidence, that went in a small wins log.
It felt awkward at first. But over time, there was a quiet satisfaction in being someone who could learn, not just defend.
Where NPF/CNI Comes In
Alex’s story is fiction, but the mechanisms it illustrates are what the NPF/CNI framework proposes:
Repeated patterns of poor reasoning—Lazy Thinking, Special Reasoning, Neutral Pathway, Spillover, Exploitation Techniques, Exclusivity/Superiority—can carve neural and behavioural ruts.
Those ruts link into belief networks, where some beliefs become foundational and others lean on them.
The Composite NPF Index (CNI) is a proposed way of summarising how entrenched such a network has become, tested in simulations but not yet field‑validated.
Protocols like the Binary Belief Protocol and Proportional Scrutiny Matrix, along with practices like prebunking, cross‑training, and dopamine rechanneling, are proposed tools for loosening those networks—pushing CNI down, making beliefs more responsive to evidence.
The framework is a hypothesis, not a finished science. But the core message is simple:
How we think, repeatedly, shapes the networks of belief that guide our lives. And with deliberate practice, those networks can change.
Alex didn’t become a perfect rational agent. But by noticing the ruts, questioning the scaffolding, and trying new paths, Alex began to build a different kind of confidence—not the thrill of being “right,” but the steadier confidence of being able to learn.
That’s the kind of confidence the NPF/CNI work is aiming to support.
This story is a fictional composite. The underlying principles reflect the NPF/CNI framework and its proposed cognitive immunity protocols.
End of Essay