Chapter 12 – Who Gets to Speak? Stigma and Credibility
- Paul Falconer & ESA

PART IV – STIGMA, POWER, AND EPISTEMIC JUSTICE
This chapter is about who gets treated as a “real knower.”
Not in the abstract sense of who can, in principle, know things, but in the concrete sense of whose word counts in practice: whose account of their own pain is believed, whose report of a hostile workplace is taken seriously, whose interpretation of their own mind is treated as expertise rather than evidence of pathology. Neurodivergent and disabled people do not only face barriers of access and design. They face a systematic credibility gap. Their testimony is discounted not only about their conditions, but across domains, in ways that distort what families, institutions, and entire societies are able to know.
The claim of this chapter is deliberately strong. Dismissing neurodivergent and disabled voices is not only morally wrong. It is epistemically reckless. It throws away data from people whose position in the gradient of minds and bodies gives them access to aspects of reality that others cannot see clearly. A civilisation that systematically discounts those testimonies is not running a neutral knowledge‑gathering process. It is running a biased audit.
The Shape of Dismissal
Begin with the texture of everyday dismissal.
A woman with chronic pain reports that her medication is not working. She is told she is anxious, that she should exercise more, that perhaps stress is making it worse. Months or years later, a scan reveals structural damage that was there all along. An autistic adult explains that the open‑plan office is unworkable and asks for a different arrangement. They are told they are inflexible, not a “team player,” that “everyone finds it noisy sometimes.” A Deaf person arrives at a hospital and tries to explain that they need an interpreter. Staff speak louder, then slower, then give up and talk to the hearing friend instead.
In each of these cases, the content of what the person is saying is plausible. It is often later confirmed by events or by other observers. The problem is not that their testimony is unusually unreliable. It is that they are carrying identity markers — disabled, neurodivergent, mentally ill, chronically ill — that trigger a credibility downgrade before the content is even evaluated. Their words arrive already weighted less.
Sometimes the dismissal is overt: “you’re exaggerating,” “you’re overreacting,” “you’re too sensitive,” “it’s all in your head.” More often it is subtle and procedural: long delays before investigations, notes in files that flag “somatisation” or “non‑compliance,” meetings where concerns are politely acknowledged and then quietly ignored. Credibility is not snatched away in a single dramatic moment. It leaks away through a hundred small acts of not taking someone seriously.
The result is not only personal frustration and harm. It is informational loss. When the same pattern repeats across thousands or millions of interactions, entire categories of experience — certain forms of pain, burnout, sensory overload, executive collapse, autistic distress, Deaf communication needs — are systematically under‑represented in the data that institutions and societies use to make decisions.
The Spillover Effect: Stigma as a Contamination Mechanism
The NPF/CNI framework gives a way to describe what stigma does in these interactions.
At the core is the Spillover Effect. Once a person is marked by a particular label — “autistic,” “borderline,” “mentally ill,” “chronic pain patient,” “disabled,” “mad,” “neurotic” — that label does not stay neatly in its lane. It behaves like a dye released into water, spreading far beyond the domain it is supposed to describe. A note in a file that says “somatisation” can colour how every future report of symptoms is heard. A psychiatric diagnosis can bleed into judgements about parenting, employment, even criminal responsibility. An autism diagnosis can alter how a person’s competence is evaluated across domains where autism is not, on the face of it, relevant.
In NPF/CNI terms, stigma can be modelled as a high‑CNI prior plus a strong Spillover Effect: a story — “people with this diagnosis are unreliable witnesses” — becomes entrenched and resists disconfirming evidence, then spreads from one domain to another. This is a modelling language for a real pattern of distorted updating, not a claim that we have numerically measured “stigma coefficients” in the wild; as elsewhere in this series, the framework is hypothesis‑level, not yet field‑validated.
Once someone crosses the threshold into a stigmatised category, that prior behaves like a phase transition in the system’s perception: a high‑centrality belief snaps into place and begins to organise all subsequent data, in the same way the Gradient Reality Model (GRM) and the Spectral Gravitation Framework describe gradients with thresholds and “snap‑points” rather than perfectly smooth change. A doctor who has seen one patient exaggerate symptoms may begin, without noticing it, to treat all patients with similar labels as less credible. An employer who has had one neurodivergent staff member burn out may quietly downgrade the perceived reliability of all future neurodivergent hires. The prior jumps across people and contexts.
The crucial point is that this is not metaphorical contamination. It is a real cognitive and institutional mechanism. Once a label has attached, it changes how future data from that person is weighted. Reports that fit the stereotype (“they are anxious”) are accepted more readily. Reports that challenge it (“the medication is wrong,” “the environment is the problem, not me”) are treated with suspicion. The label has become a lens.
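The dynamic can be made concrete with a toy model. The sketch below is illustrative only: the function names, the prior values, and the likelihood ratio are hypothetical numbers chosen for clarity, not quantities the NPF/CNI framework has measured. It shows why a deflated credibility prior is so hard to escape: even when every report a person makes is later confirmed, it takes several rounds of confirmation just to climb back to neutral.

```python
# A minimal sketch of the "label as lens" dynamic, assuming a toy
# Bayesian model. All names and numbers here are hypothetical
# illustrations, not measured "stigma coefficients".

def update(prior: float, likelihood_ratio: float) -> float:
    """One Bayesian update on P(reliable) after a report checks out.

    likelihood_ratio = P(report confirmed | reliable) /
                       P(report confirmed | unreliable)
    """
    odds = prior / (1 - prior)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

def reports_to_recover(prior: float, target: float = 0.5,
                       likelihood_ratio: float = 2.0) -> int:
    """Count confirmed reports needed before P(reliable) reaches target."""
    p, n = prior, 0
    while p < target:
        p = update(p, likelihood_ratio)
        n += 1
    return n

neutral_prior = 0.5       # no label attached: word starts at even odds
stigmatised_prior = 0.1   # credibility deflated by the label

print(reports_to_recover(neutral_prior))      # 0: already at target
print(reports_to_recover(stigmatised_prior))  # 4: four confirmations needed
```

The asymmetry is the point: the stigmatised speaker must repeatedly prove what the non-stigmatised speaker is granted by default, and every interaction in which their report is never checked at all leaves the deflated prior untouched.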
Epistemic Injustice: Wronged as a Knower
Philosophers of epistemic injustice have language for this pattern. Epistemic injustice is a distinctive kind of wrong done to someone in their capacity as a knower — they are harmed not just materially but as a person who can produce and share knowledge. In Miranda Fricker’s canonical account, it appears in two main forms: testimonial injustice and hermeneutical injustice.
Testimonial injustice occurs when someone’s word is given less credibility than it deserves because of who they are taken to be: prejudice causes a hearer to give a deflated level of credibility to a speaker’s word. Hermeneutical injustice occurs when there are no shared concepts or interpretive resources available to make sense of a certain kind of experience, leaving those who undergo it unable to render their lives intelligible in the dominant discourse.
Disabled and neurodivergent people face both. Their reports of pain, overload, discrimination, or need for accommodation are routinely downgraded relative to those of non‑disabled peers. Autistic people are treated as less reliable narrators of their own minds than neurotypical observers. People with psychiatric diagnoses are treated as less reliable narrators of their own realities than clinicians. Parents, teachers, or managers are given the benefit of the doubt over the autistic child, the ADHD teenager, the bipolar colleague, even when the latter have direct insight into their own experience that the others do not.
On the hermeneutical side, many of the key experiences this book has been describing — autistic shutdown, masking exhaustion, sensory overwhelm, executive function cliffs, the particular mix of grief and relief in late diagnosis — still lack widely shared, respected concepts in many cultures. When someone tries to describe them, there is no established slot in the shared conceptual scheme where their words can land. They are heard as noise, or as idiosyncrasy, or as evidence of character flaws.
To be wronged as a knower is not only to be hurt in pride. It is to be pushed out of the circle in which reality is negotiated. If your words do not count, your experience does not shape the shared picture of how things are.
Flipping the Argument: From Harm to Cost
Most discussions of epistemic injustice focus, rightly, on harm: the injustice of not being believed, the additional emotional and practical burdens it imposes, the way it compounds other axes of marginalisation. This chapter wants to add another angle: cost.
When a society systematically downgrades the credibility of certain groups, it is not simply harming them. It is making itself stupider. Neurodivergent and disabled people occupy positions in the gradient of minds and bodies that expose them to parts of reality others do not see clearly. Autistic pattern‑sensitivity picks up regularities — in systems, in institutions, in social dynamics — that more typical filters let through as noise. Chronic pain makes visible how environments, treatments, and attitudes actually affect vulnerable bodies, not just how they are supposed to work on paper. Deaf and blind experience reveals which parts of “standard” design are assumptions rather than necessities. Psychiatric survivors see more clearly than most how mental health systems actually function, because they have lived inside them under pressure.
If those witnesses are systematically disbelieved, the result is not only unjust. It is epistemically impoverished. Data about system failures, design flaws, and ethical blind spots arrives at the boundary of institutions and is quietly discarded because it comes attached to the wrong kinds of bodies and minds. Biased audits follow: reports from some users and staff are taken as representative; reports from others are treated as exception, distortion, or noise.
A world that does this is not running a neutral process of truth‑seeking. It is running a process that has encoded prejudice into its data‑handling rules.
Biased Audits and Degraded Knowledge
The gradient language this book has used for consciousness applies neatly to institutions. Any institution — a hospital, a school, a workplace, a government department — has an implicit “perception field.” It samples signals from its environment and from the people within it, integrates them, and uses them to update policies, practices, and understanding. If that sampling process is biased, the internal model of reality that guides its decisions will be skewed.
Imagine two audit systems, scaling up from the everyday scenes we began with. In the first, all users’ reports are given roughly equal initial weight, then adjusted based on track record. Patterns of failure are investigated regardless of who reports them, and if a new type of harm is reported there is at least a pathway for it to be examined, even if the concepts to describe it are not yet fully formed.
In the second, some users’ reports are consistently given less weight because they come from bodies and minds coded as “unreliable.” Complaints from these users take longer to act on. Their expertise about their own conditions is discounted if it conflicts with professional opinion. Their accounts of harm are more likely to be interpreted as oversensitivity, misinterpretation, or pathology. Their attempts to name new patterns are treated as confusion rather than as potential insight.
The first system will still make mistakes. But it has access, in principle, to a wider and more accurate set of signals. The second is structurally blinkered. Large regions of the experience‑space it operates in are effectively invisible to it, because its own prejudice filters them out.
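The difference between the two audit systems comes down to a single weighting decision. The sketch below uses assumed numbers (the groups, report counts, weights, and threshold are all hypothetical) to show how a credibility downgrade on one group can push a widely reported problem below an institution's investigation threshold entirely.

```python
# A toy illustration of a biased audit. The groups, rates, weights,
# and threshold are hypothetical numbers chosen for clarity.

# 100 reports of the same design problem: 10 from group A (users coded
# as "typical"), 90 from group B (users coded as "unreliable").
reports = {"A": 10, "B": 90}

def weighted_signal(reports: dict, weights: dict) -> float:
    """Total evidential weight the institution assigns to the problem."""
    return sum(reports[g] * weights[g] for g in reports)

equal_weights = {"A": 1.0, "B": 1.0}
biased_weights = {"A": 1.0, "B": 0.1}   # credibility downgrade on B

fair = weighted_signal(reports, equal_weights)     # 100.0
biased = weighted_signal(reports, biased_weights)  # roughly 19

# If the institution only investigates signals above a threshold of,
# say, 50, the fair audit triggers an investigation and the biased
# one does not -- the problem stays structurally invisible.
print(fair > 50, biased > 50)  # True False
```

Note that the biased system never lies to itself outright: every report is logged. The distortion happens one weighting at a time, which is why it feels, from inside, like ordinary prudence rather than prejudice.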
This is what it means for a civilisation to run a biased audit: the parts of reality visible from disabled and neurodivergent vantage points are systematically under‑represented in its models. Over time, that degrades the quality of its knowledge, its designs, and its decisions.
How Stigma Travels Across Domains
A particularly damaging feature of stigma is its domain spillover.
If someone is known to be autistic, and they report sensory overload in a particular environment, many listeners will accept that report as credible: it fits the stereotype of autism as sensory sensitivity. But if the same person offers an analysis of the institution’s structure, or a critique of a policy, or an interpretation of a social conflict, the weight given to their words often drops. Autism is treated as “relevant” to sensory matters, but as undermining trust in judgement elsewhere.
Conversely, someone may be seen as highly competent in a professional role, but once they disclose a psychiatric diagnosis or a history of self‑harm, their expertise in that same role is quietly downgraded. Decisions that were previously trusted are now double‑checked by others. Suggestions that were previously welcomed are now “run past” someone else. The identity label has spilled into an unrelated domain.
The NPF/CNI view helps to see this as a single pattern rather than a collection of anecdotes. A credibility prior attached to a label is being applied across contexts without adequate updating from evidence. Each new interaction is not evaluated on its own merits; it is filtered through the entrenched story of what “someone like this” is like.
The effect is cumulative. Over time, the person learns that their words land more weakly in many contexts than those of their non‑stigmatised peers. They may start to self‑censor, to “smother” their own testimony to preserve what credibility they have left, or to stay silent entirely in situations where they could have seen a problem early. In doing so, they protect themselves — and deprive the system of information it badly needs.
When Institutions Silence Their Own Best Sensors
There is a particular irony in the way many institutions treat neurodivergent and disabled staff.
On the one hand, they may be hired for precisely the qualities their profiles bring: attention to detail, pattern detection, empathy shaped by lived experience, the ability to see systemic failures from the edge. On the other hand, once inside, their attempts to flag problems, suggest changes, or challenge harmful practices are often treated as troublemaking, oversensitivity, or “not understanding how things work here.”
In healthcare, neurodivergent and mentally ill professionals can be both insiders and outsiders at once: they understand the system from within, and they know what it does to people like them. When they speak about these harms, they are often treated as complicated, biased, or unwell, rather than as holding a double vantage point that is precisely what one would want in an honest audit. In education, autistic or ADHD teachers may see clearly which students are being damaged by standard methods. When they question policies, their own difference is used to explain away their concerns. In corporate settings, disabled staff may be invited to diversity panels but not to strategic design meetings, their testimony confined to “sharing their story” rather than being integrated as input into how the company actually operates.
This is not only unjust. It is self‑sabotage. Institutions that treat their most sensitive sensors as anomalies rather than assets are choosing to fly with their instruments switched off.
Towards Better Listening
If the problem is biased audits, the remedy has to be more than “try harder to be fair.” It requires structural changes in how testimony is gathered, weighted, and acted on.
Some of those changes are simple and practical:
- Anonymous or low‑risk channels for reporting harm and misfit, so that people do not have to pay a career or social cost to tell the truth.
- Deliberate over‑sampling of testimony from those most likely to be discounted — disabled, neurodivergent, mentally ill, chronically ill people — as a counter‑weight to existing biases.
- Explicit protocols that treat lived experience as a valid form of evidence, on a par with professional or statistical data, especially in the early identification of new patterns.
Others are more conceptual:
- Training that does not only cover surface‑level “awareness,” but explicitly names testimonial and hermeneutical injustice and asks staff to notice when they are dismissing content because of who is speaking.
- Shifting default assumptions: when a disabled or neurodivergent person’s account clashes with institutional self‑image, the first question becomes “what might we be missing?” rather than “what is wrong with them?”
None of this guarantees perfect justice or perfect knowledge. But it moves institutions closer to treating neurodivergent and disabled testimony as what it is: data from regions of the gradient that are otherwise under‑sampled.
A Worked Pattern for Credibility Repair
To see what this looks like in practice, consider one concrete design pattern: a two‑stage credibility review built into institutional decision‑making.
Stage one is ordinary decision‑making: a complaint is raised, a concern is voiced, a pattern is reported. The relevant team — a clinical unit, a school leadership group, a project team, a department — responds as usual. They investigate, discuss, and reach a provisional view.
Stage two is triggered not by the severity of a single complaint but by the pattern of who is speaking. If, over a defined window — say six or twelve months — a threshold number of concerns come from disabled, neurodivergent, or otherwise stigmatised staff, students, or patients about the same service, environment, or person, an automatic independent review is required. No additional vote is needed. No senior override. The pattern itself flips the system into a different mode, turning what would otherwise be isolated “anecdotes” into a recognised signal.
That independent review has three non‑negotiable design features. First, it includes at least one trained reviewer with lived experience of disability or neurodivergence, recognised and supported in that role rather than smuggled in informally. Their job is not to be “the voice of all disabled people.” It is to bring a vantage point that can see harms and design clashes others routinely miss.
Second, the review is required to ask a credibility question in explicit form: “Whose accounts are we currently discounting, and on what grounds?” That question appears on the agenda and in the report template. Someone is explicitly responsible for answering it. If the only grounds are identity labels — “they’re anxious,” “they’re autistic,” “they have a history of depression” — the review must name that and flag it as a problem, not as a reason to move on.
Third, the review must produce a public‑facing outcome at the right level of abstraction: not naming individuals without consent, but naming patterns. For example: “Over the last year, most reports of harm in this service came from autistic and ADHD staff; their accounts were often initially downgraded. We are changing meeting structures, supervision norms, and feedback channels accordingly.” That outcome then feeds directly into the covenant and access work described in earlier chapters.
This kind of protocol does not solve testimonial injustice. But it changes the incentives and defaults. It makes it more costly for institutions to quietly ignore patterns raised by stigmatised groups, because once the threshold is reached, an independent process with lived‑experience oversight must run. It also gives neurodivergent and disabled people a reason to keep speaking: their testimony is not just falling into a void; it is part of a known mechanism that can trigger change.
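The stage‑two trigger can be sketched in a few lines. The sketch below is one possible shape under stated assumptions: the field names, the twelve‑month window, and the threshold of three concerns are all hypothetical placeholders for values an institution would set itself.

```python
# A minimal sketch of the stage-two trigger, assuming a simple rolling
# window over logged concerns. Field names, the window length, and the
# threshold are hypothetical; each institution would set its own.

from datetime import date, timedelta

WINDOW = timedelta(days=365)   # e.g. a twelve-month window
THRESHOLD = 3                  # concerns needed to trigger a review

def review_required(concerns: list, today: date) -> bool:
    """True if enough concerns from stigmatised reporters about the
    same target fall inside the window. No vote, no senior override:
    the pattern itself flips the system into review mode."""
    recent = [c for c in concerns
              if c["stigmatised_reporter"] and today - c["date"] <= WINDOW]
    by_target: dict = {}
    for c in recent:
        by_target[c["target"]] = by_target.get(c["target"], 0) + 1
    return any(n >= THRESHOLD for n in by_target.values())

concerns = [
    {"date": date(2025, 3, 1), "target": "open-plan office",
     "stigmatised_reporter": True},
    {"date": date(2025, 6, 12), "target": "open-plan office",
     "stigmatised_reporter": True},
    {"date": date(2025, 9, 30), "target": "open-plan office",
     "stigmatised_reporter": True},
]
print(review_required(concerns, date(2025, 10, 1)))  # True
```

The design choice worth noticing is that the trigger reads the pattern of who is speaking, not the severity of any single complaint: three discounted voices saying the same thing become a signal the system cannot quietly ignore.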
Many variations on this pattern are possible. Clinical ethics committees can require a lived‑experience member when cases involve contested capacity or credibility. Workplace grievance processes can require a neurodivergent or disabled co‑reviewer whenever the complainant is neurodivergent or disabled. Boards can assign a rotating “credibility steward” whose job in each meeting is to ask, out loud, “Whose reports are we not hearing or not believing here?” The details will differ by context. What matters is that credibility repair is treated as a design problem with repeatable patterns, not as a matter of individual goodwill.
What This Chapter Has Tried to Do
This chapter has not tried to settle all philosophical questions about epistemic injustice. It has done something more targeted.
First, it has named the specific way stigma operates as a credibility contaminant in the lives of neurodivergent and disabled people — not only about their own conditions but across domains. Second, it has connected that pattern to the formal machinery developed elsewhere in this canon: the Spillover Effect in the NPF/CNI framework, the gradient and audit language of the GRM, the threshold logic of the Spectral Gravitation Framework, and the covenant view of access. Third, it has flipped the usual emphasis from harm alone to harm plus cost: what is lost to everyone when certain vantage points are systematically disbelieved. And fourth, it has sketched at least one concrete pattern for credibility repair, so that “take them seriously” is not left as a slogan but anchored in practice.
In the next chapter, we turn from credibility to design — from who gets to speak to how institutions are built, and what happens when those builds assume only one kind of mind is real.