Chapter 1 – The Myth of the “Normal” Mind

  • Writer: Paul Falconer & ESA
  • 7 hours ago
  • 11 min read

PART I – RETHINKING "NORMAL": MINDS, BODIES, AND REALITY

Let me start with a question I am asked often, in different forms. Sometimes it comes as genuine curiosity; sometimes it comes as a challenge; sometimes it contains a barely concealed scepticism, the implication that something is being manufactured, inflated, perhaps even fashionable. The question is: Why are there suddenly so many neurodivergent people?

It is a fair question. Autism diagnoses have increased dramatically over the past three decades. ADHD identification has risen sharply. Dyslexia, anxiety, sensory processing differences — all showing upward trends in prevalence data. If you are in your fifties or older, you almost certainly grew up in a world where you knew very few people with these diagnoses, if any. Now you may know dozens. What happened?

I want to answer that question carefully and fully, because how you answer it determines almost everything about how you think about the rest of this book. But I want to begin somewhere more fundamental — with the concept of “normal” itself. Because the question “why are there so many more neurodivergent people?” is already a question built on a particular picture of the world: one in which there is a correct kind of mind, and these other minds are departures from it. I want to look at that picture before we accept it.

Where “Normal” Came From

The concept of “normal” applied to human minds and bodies is not ancient. It is not something humans always assumed. It has a specific intellectual and institutional history, and that history is revealing.

The word “normal” in its statistical sense — as the centre of a distribution, the average, the typical — enters scientific usage in the early nineteenth century. The Belgian polymath Adolphe Quetelet developed the concept of the “average man” (l’homme moyen) in the 1830s — the ideal human defined as the statistical centre point across measurements. In his framing, the average was not just the middle of the pack; it was the target.

The statistician Francis Galton, working later in the nineteenth century, borrowed the bell curve from Quetelet and applied it to human intelligence and ability — and then used it as the foundation for eugenics, the project of “improving” humanity by increasing the frequency of “desirable” traits and decreasing “undesirable” ones. The concept of the normal, in other words, was almost immediately weaponised — turned from a descriptive statistical tool into a prescriptive social programme.

Psychiatry was building its own version of this apparatus at the same time. The history of psychiatric classification is a history of drawing lines — between sanity and insanity, normality and pathology, the educable and the uneducable — and those lines have consistently tracked the interests of the institutions drawing them. Who gets to be “normal” has never been a purely biological question. It has always also been a question about who is useful, who is manageable, who fits the institutions that need people to behave in particular ways.

This is not a conspiracy theory. It is a description of how categories work in social institutions, which are usually staffed by people doing their best inside inherited designs. Schools need children who can sit still, follow sequential instruction, and complete tasks at a standardised pace. Workplaces need employees who arrive consistently, process information in roughly the same way, and communicate within narrow social registers. When a mind works differently — when attention is non‑linear, when sensory experience is intense, when sequential task completion is genuinely difficult, when social communication takes a different form — it does not fit the institution. And the institution, which is not set up for self‑examination, tends to classify the misfit as the problem.

“Normal,” in short, is not a biological fact. It is a social and institutional construction — and like most such constructions, it serves particular interests. That is what I mean by saying “normal” is a power‑conserving story. It maintains a world in which the people who built the institutions are the ones the institutions were built for. This does not make “normal” a fiction — central tendencies in human traits are real — but it makes it a tool as much as a description, and that distinction matters enormously for what follows in this book.

Why There Are More Diagnoses Now

With that as the frame, we can return to the original question. The answer, as honest answers usually are, is not simple. It has several components, none of which alone is sufficient, and some of which pull in different directions.

Better diagnostic tools and expanded criteria. The criteria for autism, in particular, have changed substantially. Earlier versions of diagnostic frameworks were based heavily on presentations in young white boys — the population in which autism was first studied systematically. Women, girls, people from non‑white cultural backgrounds, and people who had developed sophisticated masking strategies were systematically excluded by criteria that did not describe their experiences. As criteria have expanded and clinical awareness has grown, many people who would previously have been missed are now being identified. This is not inflation. This is correction.

Reduced stigma and increased psychological safety. There is a close parallel here with LGBTQ+ identity. The number of people who identify as LGBTQ+ has increased dramatically over the past thirty years — not because human sexuality changed, but because the cost of identifying publicly has decreased as social acceptance has grown. More people are out because being out is safer than it used to be. The same dynamic is operating with neurodivergent identity. When the cost of saying "I think I might be autistic" goes down — when it is no longer certain to cost you employment, relationships, or credibility — more people say it. Visibility follows safety, not prevalence.

Online community as a mirror. Something genuinely new has happened in the internet era: neurodivergent people can now find each other at scale. A teenager in a rural area who processes the world differently, who has always felt like they were performing “normal” without understanding the script, can now encounter communities of people who describe experiences that match theirs with startling precision. This is transformative. Recognition is not the same as diagnosis, but recognition often precedes diagnosis — and recognition at scale, in online spaces, has accelerated the cultural visibility of neurodivergent experience enormously. TikTok autism communities, ADHD Reddit forums, dyslexia Facebook groups — these are not creating neurodivergence. They are surfacing it.

Unmasking as a social phenomenon. Closely related: the concept of “masking” — the effortful process by which neurodivergent people learn to perform neurotypical behaviour in order to pass — has become more widely understood and named. When masking is named, it becomes possible to stop. When it becomes possible to stop, more people’s underlying neurodivergent profile becomes visible — to themselves and to clinicians. This is not a new phenomenon. What is new is the vocabulary for it.

Later parenthood as a contributing factor. Advanced parental age is associated with modestly elevated rates of autism in epidemiological research. The effect is real but small, and the mechanisms are not yet fully understood. It does not come close to explaining the scale of the diagnostic increase. I include it because intellectual honesty requires naming a genuine empirical signal, even when its magnitude is modest.

What it is not. It is not primarily vaccines. The evidence on this is unambiguous and replicated across dozens of independent studies in multiple countries over three decades. It is not primarily social contagion in the pejorative sense — people pretending to be neurodivergent because it is fashionable. There is social influence in diagnosis trends, as there is in all human identity formation, but this explains neither the scale of the increase nor its clinical consistency. It is not, in any meaningful sense, a fabrication.

What “Normal” Actually Costs

I want to stay with the political dimension of “normal” a little longer, because this is the thread that runs through Parts III and IV of this book, and I want you to understand what I am arguing before we get there.

When a concept of “normal” is institutionalised — when it becomes the baseline against which education, medicine, law, and workplace design are organised — it produces a particular kind of harm. Not just to individuals who do not fit, though that harm is real and substantial. It produces an epistemic harm: it makes certain kinds of knowledge invisible.

Here is what I mean. If a child consistently struggles to learn in a particular educational environment, there are two possible interpretations. One is that something is wrong with the child. The other is that something is wrong with the environment — that the environment was designed for a narrower range of cognitive profiles than actually exists in the room. The first interpretation is the one our institutions have historically defaulted to, because the second interpretation is far more expensive and disruptive. Naming the environment as the problem requires redesigning it. Naming the child as the problem requires only that the child (or their family) adapt.

The cost of the first interpretation is not only to the child. It is to the collective knowledge base. When you systematically dismiss the reports, perceptions, and testimony of neurodivergent and disabled people — when you treat their different experience as evidence of deficiency rather than as evidence of a different but legitimate mode of consciousness — you lose access to what they know. You discard data.

This is where the NPF/CNI framework enters the picture in plain language. The concept of the Spillover Effect — one of the six components of the Neural Pathway Fallacy — describes the mechanism by which a stigmatising belief about a person in one domain contaminates their credibility across all domains. Once you are labelled “disordered,” “impaired,” or “not normal,” that label doesn’t stay in its lane. It bleeds. People trust your testimony less, your pain reports less, your professional judgements less, your creative insights less. The contamination is not rational — it is a belief‑network effect, a function of how human minds build models of other humans. But it is real, and its consequences are enormous.

The myth of the “normal” mind, in other words, is not just philosophically wrong. It is epistemically costly. It runs a biased audit of human knowledge — systematically down‑weighting the testimony of people who deviate from its template — and calls that audit rigorous.

Consciousness as a Gradient, Not a Category

The alternative I am proposing in this book is not a different category system. It is not a replacement taxonomy where “neurodivergent” is good and “neurotypical” is bad, or where we redistribute pride and shame while keeping the binary structure. It is a genuinely different way of seeing.

The Gradient Reality Model (GRM) holds that human cognitive and embodied experience forms a continuous spectrum — or rather, multiple overlapping spectra. Attention is a gradient, not a binary. Sensory processing intensity is a gradient. Social information processing is a gradient. The ability to regulate arousal, maintain working memory, sustain sequential task focus, switch between cognitive modes — all gradients, with enormous variation across individuals and across contexts within the same individual.

“Normal” is a statistical convenience applied to those gradients — a way of marking the central mass of the distribution and calling it the standard. It tells you where the middle is. It does not tell you that the middle is right, or good, or the appropriate target. And it tells you nothing useful about the people at the edges of those distributions — except, perhaps, that they will need more from environments designed only for the centre.

What I want to propose — and will argue across the chapters that follow — is that the edges of those distributions are not where the failures are. They are where a great deal of the most important human experience and knowledge lives. The autistic person who processes details before patterns notices things that pattern‑first processors miss. The person with ADHD whose attention responds to urgency and novelty brings a kind of aliveness to problems that sustained linear attention cannot. The dyslexic person who builds compensatory models of understanding develops a conceptual flexibility that fluent phonological processing rarely demands. These are not consolation prizes or trade‑off perks for suffering. They are genuine cognitive affordances — real capacities that co‑exist with real difficulty in the same neurological configuration.

I am being careful here. I am not arguing for a simple inversion — “actually, neurodivergence is better.” It is not. The costs are real. The exhaustion of navigating a world not built for you is real. The pain of chronic misunderstanding is real. The barriers to education, employment, healthcare, and social belonging are real and serious. This book holds both sides without collapsing into either the tragedy model or the superpower narrative — because both of those frameworks are ways of not seeing clearly.

The Mechanism That Makes “Normal” Stick

The concept of “normal” has remarkable staying power even in the face of overwhelming evidence that the range of human minds is far wider than it allows. This staying power needs an explanation.

Part of it is institutional inertia — systems designed for a particular profile of human are expensive to redesign, and the people who built them are often the people who fit them and therefore see no urgent problem. Part of it is the cognitive comfort that comes from having a clear standard. Ambiguity is taxing; categories reduce it.

But there is also a neurological mechanism at work, and this is where the NPF/CNI framework is directly relevant. The basic insight of NPF is that belief systems become physically entrenched through repeated activation. Hebbian learning: neurons that fire together, wire together. The more a particular belief is activated — by culture, by institutions, by daily experience — the more it becomes the default path along which thinking travels. This is not a weakness of human cognition. It is what makes learning possible. But it also means that beliefs which have been repeatedly activated over a lifetime — including the belief that there is a correct kind of mind — are genuinely difficult to revise, not just emotionally but neurologically.

The Composite NPF Index (CNI) attempts to capture the degree of this entrenchment across a belief system: how resistant it is to updating and how far the belief spreads into adjacent domains. Used as a framework for understanding social belief systems rather than individual pathology, CNI gives us a way to ask: why is the concept of “normal” so resistant to the evidence against it? The answer, in CNI terms, is that it is a high‑centrality belief — one that anchors many adjacent beliefs (about intelligence, social competence, educational capacity, workplace behaviour), making it particularly resistant to revision. Changing it requires revising not just one belief but a whole network of beliefs it has organised around itself.

This is offered as a framework for understanding, not as a validated clinical instrument. The NPF/CNI series is, by its own explicit account, a formal hypothesis at a moderate epistemic confidence level. I use it here the way I use GRM and CaM throughout this series: as a lens that helps illuminate what we are looking at. Whether the formalism holds under future empirical scrutiny is a live question — and one I am committed to keeping live, rather than closing prematurely.

What Follows

The chapters ahead move from this foundation — normal as construct, as power story, as entrenched belief — into the lived terrain of different minds and bodies. We will go inside autistic experience, ADHD experience, dyslexic and dyspraxic experience, and the climate of consciousness that OCD and anxiety produce. We will move from minds to bodies — chronic pain, physical disability, sensory worlds radically different from the majority. We will look at how power and institutions shape whose knowledge gets to count. And we will try to imagine, concretely and seriously, what it would look like if we actually designed our collective life for the full range of minds and bodies that inhabit it — rather than the narrow slice we have been calling normal.

This book will not answer every question it opens. Some of the questions are too large and too live for that. What it will do is refuse to pretend to more certainty than the evidence warrants, and refuse to look away from the places where the inquiry gets uncomfortable. That is the SE Press commitment, and it is mine as well.

In the next chapter, we move from the social construction of “normal” to a deeper account of consciousness itself: not as a property that some minds have and others lack, but as a process of integration under constraint — a process that looks different across different bodies and nervous systems, and that becomes most visible when it is effortful rather than automatic.
