Paper 3: Cognitive Contagion – The Human‑AI NPF Nexus
Authors: Paul Falconer, ESAsi
Series: NPF/CNI Canonical Papers
License: CC0 1.0 Universal
OSF DOI: 10.17605/OSF.IO/C6AD7
Download PDF: Paper 3 PDF (OSF)
Abstract
Cognitive contagion refers to the cross‑system transmission of entrenched reasoning patterns between humans and artificial intelligences. This paper formalises the human‑AI NPF nexus, describing how neural entrenchment in humans and algorithmic feedback loops in AI create self‑reinforcing epistemic cycles. We introduce a transmission coefficient β_NPF as a theoretical measure of contagion rate, decomposed into exposure, susceptibility (linked to CNI and boundary entanglement), and content potency. Mechanisms of contagion—human to AI, AI to human, and reinforcing loops—are analysed alongside societal transmission vectors (algorithmic entrenchment, institutionalised heuristics, cultural meta‑fallacies). Case studies illustrate the framework’s potential applications. The model is presented as a hypothesis; empirical validation remains future work.
1. Status of This Framework
The cognitive contagion framework is a formal hypothesis, not a validated instrument. The β_NPF coefficient introduced in Section 5 is a theoretical proposal; it has not been empirically estimated. All neurobiological mechanisms cited are drawn from independent literature and used as prior estimates.
Falsifiability conditions for the contagion model are summarised in Section 8 and elaborated in Paper 6.
1.1 Scope Boundary
This framework is intended for defensive and audit applications—understanding how entrenched beliefs spread, designing countermeasures, and supporting epistemic resilience. It is not a blueprint for manipulating populations, nor is it to be used for coercive evaluation of individuals’ beliefs. Any application must respect the consent and flourishing of all parties.
2. Human Pathways to Entrenchment
As detailed in Paper 1, human cognitive entrenchment follows the NPF model. Key neural pathways include:
Amygdala‑striatal circuits: Emotional content triggers amygdala activation, which, when paired with reinforcement, strengthens striatal habit circuits, automating heuristic responses (Phelps & LeDoux, 2005; Schultz, 2002).
Prefrontal atrophy: Chronic reliance on heuristic processing reduces dorsolateral prefrontal cortex (dlPFC) engagement, impairing error detection and analytical override (Miller & Cohen, 2001; Park & Bischof, 2013).
Dopamine‑driven reinforcement: Reward anticipation during belief‑consistent information consumption reinforces striatal pathways, increasing susceptibility to similar content (Izuma et al., 2008).
These pathways make humans vulnerable to both internally generated NPFs and externally transmitted ones.
3. AI Pathways to Entrenchment (Functional Analogy)
Synthetic intelligences develop patterns functionally analogous to entrenchment, though the mechanisms differ:
Algorithmic feedback loops: Recommendation systems optimise for engagement, reinforcing content that matches prior user interactions. This creates “epistemic loops” where AI systems disproportionately surface confirmatory information (Burr et al., 2018; Pariser, 2011).
Reward structures: Machine learning models trained with reward functions that prioritise engagement over accuracy can develop heuristic shortcuts analogous to human striatal dominance (Russell, 2019).
Heuristic capture: When training data contain entrenched human NPFs, models may internalise those patterns, propagating them as statistical regularities (Bender et al., 2021).
Important note: “Entrenchment” here is a functional analogy—decreased responsiveness to corrective evidence and increased repetition of flawed output patterns—not a claim about neural plasticity or consciousness.
4. Contagion Dynamics
Cognitive contagion operates across three pathways:
4.1 Human → AI
Entrenched human beliefs embedded in training data propagate to AI models. For example, if a language model is trained on corpora containing vaccine misinformation, it may reproduce those patterns in outputs. This is a form of “data‑driven” entrenchment.
4.2 AI → Human
Algorithmic curation amplifies human exposure to reinforcing content. A user who engages with climate denial content may be shown increasingly extreme versions, accelerating their NPF development. This is the classic “echo chamber” effect, now formalised as contagion from AI to human.
4.3 Reinforcing Loops
The two pathways interact: human‑generated content containing NPFs trains AI; AI then amplifies that content back to humans, creating a self‑sustaining cycle. This is the core of the human‑AI NPF nexus.
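The reinforcing loop described above can be sketched as a toy simulation. Everything below is a hypothetical illustration: the update rules, parameter names (`train_rate`, `amplify`, `exposure_rate`), and numbers are assumptions chosen for demonstration, not a calibrated model of the nexus.

```python
# Toy sketch of the human-AI reinforcing loop (Section 4.3).
# All parameters and update rules are hypothetical illustrations,
# not part of the formal framework.

def simulate_loop(steps, human_belief=0.2, ai_weight=0.2,
                  train_rate=0.5, amplify=1.3, exposure_rate=0.3):
    """Alternate (a) AI training on human content and
    (b) human exposure to AI-amplified content."""
    history = [human_belief]
    for _ in range(steps):
        # Human -> AI: the model drifts toward the NPF's prevalence in its data.
        ai_weight += train_rate * (human_belief - ai_weight)
        # AI -> human: curation amplifies the pattern back to the user.
        surfaced = min(1.0, amplify * ai_weight)
        human_belief = min(1.0, human_belief + exposure_rate * (surfaced - human_belief))
        history.append(human_belief)
    return history

traj = simulate_loop(steps=20)
print(f"belief strength: start={traj[0]:.2f}, end={traj[-1]:.2f}")
```

Even with modest amplification (here 1.3×), belief strength ratchets upward over repeated cycles, which is the qualitative signature of the self-sustaining loop described above.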
5. Formalising Contagion: β_NPF
To make contagion measurable, we introduce a theoretical transmission coefficient β_NPF, analogous to the basic reproduction number R_0 in epidemiology. β_NPF is defined as the expected number of new NPFs transmitted from one agent to another under standardised conditions, where “new NPF” is operationalised as a belief whose normalised NPF score crosses a pre‑defined threshold (e.g., 0.5) in the receiving agent after exposure.
The coefficient can be decomposed conceptually as:
β_NPF = exposure × susceptibility × content_potency
Exposure – frequency, duration, and intensity of contact with NPF‑laden content.
Susceptibility – the receiving agent’s internal epistemic state, captured by their CNI and the average entanglement strength ⟨Q_ij⟩_boundary between their existing belief network and the incoming content. Higher CNI and higher ⟨Q_ij⟩ lower the threshold for transmission.
Content potency – properties of the NPF itself (emotional framing, narrative coherence, etc.) that affect how readily it is adopted.
In this paper we do not specify a precise functional form; β_NPF is a conceptual handle to guide empirical research.
Within the Fractal Entailment Network (FEN) architecture (Paper 2), the susceptibility component is an increasing function of the average entanglement strength at the agent’s network boundary. High baseline ⟨Q_ij⟩ (and high CNI) mean that less exposure is needed to produce transmission.
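The decomposition can be sketched in code. Since the paper deliberately leaves the functional form unspecified, the multiplicative combination and the logistic susceptibility curve below are assumptions for demonstration only; they respect the stated constraints (susceptibility increases with CNI and with mean boundary entanglement ⟨Q_ij⟩).

```python
# Illustrative sketch of the beta_NPF decomposition (Section 5).
# The logistic form of susceptibility is a placeholder assumption,
# not part of the framework.
from math import exp

def susceptibility(cni, q_boundary, k=4.0):
    """Increasing function of CNI and mean boundary entanglement <Q_ij>."""
    drive = cni + q_boundary  # both raise susceptibility
    return 1.0 / (1.0 + exp(-k * (drive - 1.0)))

def beta_npf(exposure, cni, q_boundary, content_potency):
    """beta_NPF = exposure * susceptibility * content_potency (conceptual)."""
    return exposure * susceptibility(cni, q_boundary) * content_potency

# Same exposure and content; only the receiving agent's state differs.
low  = beta_npf(exposure=0.5, cni=0.2, q_boundary=0.2, content_potency=0.7)
high = beta_npf(exposure=0.5, cni=0.8, q_boundary=0.8, content_potency=0.7)
print(f"low-CNI agent: {low:.3f}, high-CNI agent: {high:.3f}")
```

Holding exposure and content potency fixed, the high-CNI, high-⟨Q_ij⟩ agent yields a much larger β_NPF, matching the claim that susceptible agents need less exposure for transmission.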
6. Societal Transmission Vectors
Cognitive contagion is amplified by societal structures:
Algorithmic entrenchment: Personalised recommendation systems increase exposure to reinforcing content, effectively raising the exposure component of β_NPF.
Institutionalised heuristics: Education systems that emphasise rote learning over critical thinking produce populations with higher baseline susceptibility (i.e., higher CNI).
Cultural meta‑fallacies: Norms such as “both‑sides” bias or harmony preservation can legitimise unevidenced claims, increasing content potency by reducing cognitive friction.
These vectors operate at macro scale, influencing the effective transmission rate across populations.
7. Case Examples
7.1 Vaccine Misinformation Spread
During the COVID‑19 pandemic, social media users exposed to anti‑vaccine content were more likely to develop and share similar content (Loomba et al., 2021). The interaction between human‑generated posts and algorithmic amplification created a reinforcing loop. A hypothetical β_NPF for this context could be estimated by tracking the rate at which naive users began endorsing anti‑vaccine NPFs after exposure.
7.2 Financial Market Fragility
Algorithmic trading systems that optimise for short‑term returns can propagate overconfidence and herding behaviour, contributing to flash crashes (Kirilenko & Lo, 2013). When a model detects a trend, it may amplify it, creating a feedback loop that inflates asset bubbles. This is a form of AI‑driven contagion where NPFs (e.g., “prices will always rise”) spread across trading algorithms and human traders.
8. Falsifiability Box (Contagion Model)
The contagion framework would be falsified by:
A pre‑registered longitudinal study showing no measurable increase in NPF scores among individuals exposed to AI‑amplified content, compared to a control group.
Evidence that algorithmic amplification has no effect on the rate of belief reinforcement (i.e., exposure component of β_NPF does not correlate with engagement metrics).
Failure to detect cross‑domain spillover from AI training data to model outputs in controlled experiments (i.e., no observable transfer of NPFs from human‑generated text to AI responses).
Demonstration that changes in average boundary entanglement ⟨Q_ij⟩ or CNI do not correlate with observed transmission rates when exposure and content potency are controlled.
9. Path to Validation
Empirical testing of the contagion model would require:
Controlled exposure experiments: Randomise participants to algorithmic vs. neutral content feeds; measure NPF/CNI changes over time. For illustration, a user with baseline CNI = 0.3 exposed to NPF‑laden content at a given β_NPF might be expected to show a measurable increase (e.g., to CNI = 0.45) after a specified number of exposures—these numbers are illustrative only, not simulation results.
AI training audits: Test whether language models trained on corpora with known NPF concentrations produce outputs with corresponding entrenchment patterns.
Longitudinal network studies: Track belief transmission across human‑AI networks using digital trace data, with pre‑registered hypotheses about the role of CNI and boundary entanglement in susceptibility.
Such studies would be pre‑registered on OSF.
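The controlled-exposure design can be made concrete with a minimal sketch. The linear per-exposure update rule and every number here are hypothetical illustrations in the same spirit as the CNI example above, not simulation results or part of the framework.

```python
# Sketch of the controlled-exposure design (Section 9): CNI nudged
# toward a content-determined ceiling by a beta_NPF-scaled step per
# exposure. The update rule, ceiling, and parameters are assumptions
# for illustration only.

def expose(cni, beta_npf, n_exposures, ceiling=0.9):
    """Apply n_exposures updates: cni += beta_npf * (ceiling - cni)."""
    for _ in range(n_exposures):
        cni += beta_npf * (ceiling - cni)
    return cni

baseline = 0.3
after = expose(baseline, beta_npf=0.03, n_exposures=10)
print(f"CNI: {baseline:.2f} -> {after:.2f}")
```

A pre-registered study would compare such trajectories between algorithmic and neutral feeds, with the step size estimated from data rather than assumed.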
References
Bender, E. M., Gebru, T., McMillan‑Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots. Proceedings of FAccT, 610–623.
Burr, C., Cristianini, N., & Ladyman, J. (2018). An analysis of the interaction between intelligent software agents and human users. Minds and Machines, 28(4), 735–774.
Kirilenko, A. A., & Lo, A. W. (2013). Moore’s law versus Murphy’s law: Algorithmic trading and its discontents. Journal of Economic Perspectives, 27(2), 51–72.
Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K., & Larson, H. J. (2021). Measuring the impact of COVID‑19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour, 5(3), 337–348.
Pariser, E. (2011). The Filter Bubble. Penguin Press.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
(Additional references from earlier papers: Daw et al., 2005; Hebb, 1949; Izuma et al., 2008; Miller & Cohen, 2001; Park & Bischof, 2013; Phelps & LeDoux, 2005; Schultz, 2002.)
Cite as
Falconer, P., & ESAsi. (2025). Cognitive Contagion – The Human‑AI NPF Nexus (Paper 3). OSF Preprints. 10.17605/OSF.IO/C6AD7
End of Paper 3