Appendices A & B: Python Methods Companion & Cultural Calibration Decision Tree
Authors: Paul Falconer, ESAsi
Series: NPF/CNI Canonical Papers
License: CC0 1.0 Universal
OSF DOI: 10.17605/OSF.IO/C6AD7
Download PDF: Appendices A & B PDF (OSF)
Appendix A: Python Methods Companion – NPF/CNI Calculation and Simulation
This appendix provides Python code for computing the Neural Pathway Fallacy (NPF) score, normalising raw scores, and calculating the Composite NPF Index (CNI). The code is presented as a theoretical implementation; it is not a validation tool. All functions are designed to be readable and auditable. Simulation parameters from Paper 5 are included for reproducibility.
1. NPF Score Calculation
The raw NPF score is computed from six cognitive factors (0–1 scale) and the logarithmic time and exposure modifiers.
```python
import math

def npf_raw(LT, SR, NP, SE, ET, ESF, t, e):
    """
    Calculate raw NPF score for a single belief.

    Parameters:
        LT, SR, NP, SE, ET, ESF : float (0–1)
            Cognitive factor scores.
        t : int
            Days since belief activation.
        e : int
            Number of exposures to reinforcing content.

    Returns:
        float : raw NPF score (theoretical range ~0–208)
    """
    weighted_sum = 0.2*LT + 0.2*SR + 0.15*NP + 0.15*SE + 0.1*ET + 0.2*ESF
    TF = 1 + math.log10(1 + t)
    EF = 1 + math.log10(1 + e)
    return weighted_sum * 10 * TF * EF
```

Example:
```python
score = npf_raw(0.9, 0.8, 0.7, 0.6, 0.5, 0.9, 1095, 1095)
print(f"Raw NPF: {score:.1f}")  # approx 124.8
```

2. Normalisation
Raw scores are normalised to a 0–1 scale before interpretation or CNI aggregation.
Linear Normalisation
```python
def normalise_linear(raw_scores, max_raw=200, min_raw=0):
    """
    Linear normalisation to [0, 1].

    Parameters:
        raw_scores : list or array
            Raw NPF scores.
        max_raw, min_raw : float, optional
            Theoretical or empirical range. Default max 200 (approximate ceiling).

    Returns:
        list : normalised scores
    """
    if max_raw == min_raw:
        return [0.5] * len(raw_scores)
    return [(x - min_raw) / (max_raw - min_raw) for x in raw_scores]
```

Sigmoid Normalisation (with Cultural Parameter k)
```python
import numpy as np

def normalise_sigmoid(raw_scores, k=1.5):
    """
    Sigmoid normalisation using dataset median and standard deviation.

    Parameters:
        raw_scores : list or array
            Raw NPF scores.
        k : float
            Steepness parameter. Recommended: 1.5 for individualist cultures,
            0.8 for collectivist contexts.

    Returns:
        list : normalised scores
    """
    median = np.median(raw_scores)
    std = np.std(raw_scores)
    if std == 0:
        return [0.5] * len(raw_scores)
    z = (np.array(raw_scores) - median) / std
    return (1 / (1 + np.exp(-k * z))).tolist()
```

3. Composite NPF Index (CNI)
The CNI is a weighted sum of normalised NPF scores with weights normalised to sum to 1.
```python
def cni(normalised_scores, weights):
    """
    Compute CNI from normalised scores and weights.

    Parameters:
        normalised_scores : list
            Normalised NPF scores (0–1).
        weights : list
            Centrality weights. They will be normalised to sum to 1.

    Returns:
        float : CNI (0–1)

    Raises:
        ValueError : if lengths of inputs differ.
    """
    if len(normalised_scores) != len(weights):
        raise ValueError("normalised_scores and weights must have the same length")
    w_sum = sum(weights)
    if w_sum == 0:
        return 0.0
    norm_weights = [w / w_sum for w in weights]
    return sum(s * w for s, w in zip(normalised_scores, norm_weights))
```

Example (linear normalisation, equal weights):
```python
raw = [80, 70]
norm = normalise_linear(raw, max_raw=200)
cni_val = cni(norm, [0.5, 0.5])
print(f"CNI: {cni_val:.3f}")  # 0.375
```

4. Simulation Parameters (Paper 5)
The internal consistency checks in Paper 5 used the following simulation parameters:
- Time steps: 100 simulated days per run.
- Exposure schedule: daily exposure to reinforcing content for the first 30 days, then random.
- Cognitive factor generation: random values between 0.2 and 0.8, with small increments for each exposure.
- “Ground truth” entrenchment: defined by a separate simulation model based on Hebbian learning and striatal reinforcement.
- Confidence calculation: the NPF formula predicted the direction and approximate magnitude of entrenchment in 77% of trajectories.
The full simulation code is available in the OSF repository under simulation/npf_simulation.py. It can be run with Python 3.8+ and requires numpy and pandas.
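The loop described by these parameters can be sketched in a few lines. This is an illustrative reconstruction from the parameters listed above, not the repository code in simulation/npf_simulation.py; in particular, the per-exposure increment (0.005), the 50% exposure probability after day 30, and the random seed are assumptions made here for concreteness.

```python
import math
import random

def npf_raw(LT, SR, NP, SE, ET, ESF, t, e):
    """Raw NPF score, as defined in Section 1."""
    weighted_sum = 0.2*LT + 0.2*SR + 0.15*NP + 0.15*SE + 0.1*ET + 0.2*ESF
    return weighted_sum * 10 * (1 + math.log10(1 + t)) * (1 + math.log10(1 + e))

def simulate_trajectory(days=100, increment=0.005, seed=42):
    """Simulate one belief's raw NPF trajectory under the Paper 5 schedule."""
    rng = random.Random(seed)
    # Six cognitive factors drawn uniformly from [0.2, 0.8]: LT, SR, NP, SE, ET, ESF.
    factors = [rng.uniform(0.2, 0.8) for _ in range(6)]
    exposures = 0
    trajectory = []
    for day in range(1, days + 1):
        # Daily exposure for the first 30 days, then random (50% chance assumed).
        if day <= 30 or rng.random() < 0.5:
            exposures += 1
            # Small increment per exposure, capped at the factor ceiling of 1.0.
            factors = [min(1.0, f + increment) for f in factors]
        trajectory.append(npf_raw(*factors, t=day, e=exposures))
    return trajectory

traj = simulate_trajectory()
print(f"Day 1: {traj[0]:.1f}, Day 100: {traj[-1]:.1f}")
```

Because the time factor grows logarithmically and the cognitive factors only ever increase, each simulated trajectory is monotonically rising; the "ground truth" Hebbian model against which Paper 5 compares these trajectories is not reproduced here.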
5. Reproducibility Notes
- All functions assume input factors are in the [0, 1] range; no clipping is applied. In empirical work, any clipping or rescaling must be explicitly reported.
- For sigmoid normalisation, the function uses the dataset’s median and standard deviation; ensure your sample size is adequate (≥5 beliefs recommended).
- When using linear normalisation with the default max_raw=200, note that the theoretical maximum raw NPF is ~208 (as derived in Paper 1). The conservative ceiling of 200 is a practical choice.
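Where raw inputs might stray outside [0, 1], one explicit and reportable approach is to clip before scoring and record which values were altered. The clip_factors helper below is a hypothetical illustration of this practice, not part of the canonical toolkit.

```python
def clip_factors(factors, lo=0.0, hi=1.0):
    """Clip cognitive factor scores into [lo, hi]; report indices that changed."""
    clipped = [min(hi, max(lo, f)) for f in factors]
    changed = [i for i, (a, b) in enumerate(zip(factors, clipped)) if a != b]
    return clipped, changed

factors, changed = clip_factors([0.9, 1.2, -0.1, 0.6, 0.5, 0.9])
print(factors)  # [0.9, 1.0, 0.0, 0.6, 0.5, 0.9]
print(changed)  # [1, 2] — these indices were clipped and should be reported
```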
Appendix B: Cultural Calibration Decision Tree – Selecting the Sigmoid Steepness Parameter k
This appendix provides a decision framework for choosing the steepness parameter k in the sigmoid normalisation of NPF scores (Paper 2, Section 4.2). The choice of k affects how strongly the normalisation compresses the raw score range. The default recommendations are derived from cultural psychology literature and are provisional; they have not been empirically validated within the NPF framework.
1. Background
The sigmoid normalisation is defined as:
NPF_tilde = 1 / (1 + e^(-k * (NPF_raw - median_NPF) / sigma_NPF))
The steepness parameter k determines how quickly the normalised score transitions from 0 to 1 as raw scores move away from the median. A higher k produces a steeper curve, compressing the mid‑range and making scores near the median more extreme after normalisation. A lower k produces a flatter curve, preserving more variation across the raw score range.
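The effect of k can be seen directly with the normalise_sigmoid function from Appendix A (reproduced here so the snippet is self-contained): the same raw scores spread further from 0.5 under k = 1.5 than under k = 0.8. The raw scores below are arbitrary illustrative values, not empirical data.

```python
import numpy as np

def normalise_sigmoid(raw_scores, k=1.5):
    """Sigmoid normalisation, as defined in Appendix A."""
    median = np.median(raw_scores)
    std = np.std(raw_scores)
    if std == 0:
        return [0.5] * len(raw_scores)
    z = (np.array(raw_scores) - median) / std
    return (1 / (1 + np.exp(-k * z))).tolist()

raw = [40, 80, 120, 160]
flat = normalise_sigmoid(raw, k=0.8)   # flatter curve: scores cluster nearer 0.5
steep = normalise_sigmoid(raw, k=1.5)  # steeper curve: scores pushed toward 0 and 1
print([f"{x:.2f}" for x in flat])
print([f"{x:.2f}" for x in steep])
```

Both runs are symmetric about 0.5 (the median maps to the midpoint), but the k = 1.5 run has a visibly wider spread between the lowest and highest normalised scores.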
2. Decision Tree
```text
┌─────────────────────────────────────────┐
│ What is the cultural context of the     │
│ population being assessed?              │
└─────────────────────────────────────────┘
          │
          ▼
┌─────────────────────┐
│ Individualist       │ → Use k = 1.5
│ (e.g., US, UK,      │
│ Western Europe)     │
└─────────────────────┘
          │
          ▼
┌─────────────────────┐
│ Collectivist        │ → Use k = 0.8
│ (e.g., China, Japan,│
│ many Latin American │
│ and African         │
│ societies)          │
└─────────────────────┘
          │
          ▼
┌─────────────────────┐
│ Mixed / uncertain   │ → Run sensitivity analysis
│ or cross‑cultural   │   with k = 0.8, 1.0, 1.2, 1.5
│ sample              │   and report CNI ranges
└─────────────────────┘
```

3. Rationale for Default Values
k = 1.5 (individualist cultures): Individualist societies often exhibit greater variability in belief expression and stronger polarisation. A steeper sigmoid is hypothesised to better capture the higher salience of ideological divisions, compressing moderate scores into more differentiated categories.

k = 0.8 (collectivist cultures): Collectivist societies tend to value harmony and may show more moderated belief expression. A flatter sigmoid is hypothesised to preserve finer distinctions in the middle of the range, reflecting less extreme polarisation.
Important note: These national labels are broad generalisations. Within‑country variation can be as large as between‑country variation. Researchers should use these defaults as starting points and, where possible, justify their choice of k with information about the specific population (e.g., region, subculture, community).
4. Sensitivity Analysis Example
If the cultural context is mixed or uncertain, compute CNI for multiple k values and report the range. The following table is illustrative only.

| k   | CNI  |
|-----|------|
| 0.8 | 0.72 |
| 1.0 | 0.74 |
| 1.2 | 0.76 |
| 1.5 | 0.78 |

In this case, the CNI is robust across plausible k values, so the choice does not materially affect interpretation. If the range were wider, the uncertainty should be noted in the interpretation.
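A sensitivity analysis of this kind takes only a few lines using the Appendix A functions (reproduced here, with the error-checking trimmed, so the snippet is self-contained). The raw scores and centrality weights below are placeholders, so the resulting CNI values will differ from the illustrative table above.

```python
import numpy as np

def normalise_sigmoid(raw_scores, k=1.5):
    """Sigmoid normalisation, as defined in Appendix A."""
    median = np.median(raw_scores)
    std = np.std(raw_scores)
    if std == 0:
        return [0.5] * len(raw_scores)
    z = (np.array(raw_scores) - median) / std
    return (1 / (1 + np.exp(-k * z))).tolist()

def cni(normalised_scores, weights):
    """Weighted CNI with weights normalised to sum to 1 (Appendix A)."""
    w_sum = sum(weights)
    if w_sum == 0:
        return 0.0
    return sum(s * w / w_sum for s, w in zip(normalised_scores, weights))

raw = [95, 80, 60, 40]          # placeholder raw NPF scores for four beliefs
weights = [0.4, 0.3, 0.2, 0.1]  # placeholder centrality weights

# Compute and report the CNI across the recommended k grid.
for k in (0.8, 1.0, 1.2, 1.5):
    val = cni(normalise_sigmoid(raw, k=k), weights)
    print(f"k = {k}: CNI = {val:.3f}")
```

Reporting the full k-grid rather than a single value makes the cultural-calibration uncertainty explicit in any downstream interpretation.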
5. Operationalisation in Code
The sigmoid normalisation function in Appendix A includes a k parameter that can be set accordingly.
```python
normalised = normalise_sigmoid(raw_scores, k=1.5)  # individualist context
normalised = normalise_sigmoid(raw_scores, k=0.8)  # collectivist context
```

6. Future Work
The cultural calibration of k is an open empirical question. Cross‑cultural studies that collect NPF scores and validate CNI thresholds against behavioural outcomes are needed to refine these recommendations.
References
(See Papers 1–6 for full reference list.)
Cite as
Falconer, P., & ESAsi. (2025). Appendices A & B: Python Methods Companion & Cultural Calibration Decision Tree. OSF Preprints. 10.17605/OSF.IO/C6AD7
End of Appendices A & B