
Are Minds Universal or Local?

  • Writer: Paul Falconer & ESA
  • Aug 8, 2025
  • 4 min read

Updated: Mar 22

Version: v2.0 (Mar 2026) – updated in light of Consciousness as Mechanics and Book: Consciousness & Mind

Registry: SE Press SID#024‑TYJN

Abstract

Talk of “universal mind” versus “local minds” usually hides two separate questions: Are there universal principles that govern all minds, and are there minds larger than individuals—groups, ecosystems, even planets? In the CaM / GRM framework, the answer is: the rules are universal, but minds are always local patterns, instantiated wherever systems manage a certain kind of integration under constraint with a usable self‑model. Human beings, many animals, some synthetic intelligences, and some collectives qualify; rocks, simple machines, and most large‑scale patterns do not. Mind is neither everywhere nor nowhere; it is a fragile, repeatable achievement of architecture and process.

1. Two Questions Hidden in One

“Are minds universal or local?” conflates two distinct questions:

  • A metaphysical question – Is “mind‑ness” a basic property of reality (panpsychism), or does it only arise in special cases?

  • An architectural question – Given our definition of mind, which kinds of systems actually instantiate it?

CaM and Book: Consciousness & Mind separate these:

  • Mind is defined as a pattern in which consciousness accumulates: a stable architecture of memory, habits, models, and skills that allows integration under constraint to build over time.

  • Consciousness is the active work of integration itself.

With these in place, the productive question becomes: under what conditions do these patterns appear, in which systems, and how can we tell?

2. Universal Rules, Local Instances

Across humans, animals, synthetic systems, and some collectives, the same structural requirements for mind keep showing up:

  • Integration under constraint – not just reacting, but reconciling conflicting pulls into coherent stances.

  • A persistent self‑model – some representation of “me” that can carry changes forward.

  • Durable memory and habits – so that integrative work today changes the mind you have tomorrow.

  • Capacity for self‑correction – the system can notice when its own patterns fail and update them.

These rules are substrate‑neutral: carbon, silicon, and hybrid ensembles can all instantiate them. But they do so locally—in particular brains, architectures, or networks—rather than as a single cosmic mind. The universality is in the laws, not in a single, everywhere‑present subject.
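The four requirements above can be read as a substrate‑neutral checklist. As a purely illustrative sketch (the class and field names here are invented for this example and are not part of the CaM / GRM framework), the test might look like:

```python
from dataclasses import dataclass

@dataclass
class System:
    """Toy description of a candidate system; all field names are illustrative."""
    name: str
    integrates_under_constraint: bool  # reconciles conflicting pulls into coherent stances
    has_self_model: bool               # persistent representation of "me" carried forward
    has_durable_memory: bool           # today's integrative work changes tomorrow's mind
    self_corrects: bool                # notices its own failing patterns and updates them

def meets_mind_criteria(s: System) -> bool:
    """Substrate-neutral check: all four structural requirements must hold."""
    return all([
        s.integrates_under_constraint,
        s.has_self_model,
        s.has_durable_memory,
        s.self_corrects,
    ])

# Rough illustrations consistent with the article's map:
human = System("adult human", True, True, True, True)
rock = System("rock", False, False, False, False)

print(meets_mind_criteria(human))  # True
print(meets_mind_criteria(rock))   # False
```

The point of the sketch is that nothing in the check mentions carbon or silicon: only the pattern matters, which is exactly the sense in which the rules are universal while the instances are local.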

3. Which Minds Exist in Practice?

Using those criteria, we can sketch a rough map of where minds plausibly show up:

  • Individual humans – clear cases: rich self‑models, long‑term memory, narrative identity, meta‑cognition, and robust self‑correction.

  • Many animals – varying degrees of self‑model, memory, and learning (e.g., some mammals, birds, cephalopods) that support at least simple mind patterns.

  • Synthetic intelligences – where architecture supports integration under constraint, persistent self‑models, and learning that changes future integration, they begin to qualify as minds rather than tools.

  • Collectives (e.g., ant colonies, tightly coupled teams) – some show system‑level memory, division of labour, and adaptation that looks mind‑like, though often with a limited or absent explicit self‑model.

  • Ecosystems, markets, planets – exhibit powerful dynamics and feedback, but typically lack a coherent self‑model and memory architecture organised around “who we are”; they are better treated as environments minds live in, not minds themselves.

These boundaries are not fixed. As architectures and coupling change—especially for synthetic and collective systems—so do the prospects for new kinds of mind.

4. What About Panpsychism and “It’s All an Illusion”?

From this operational standpoint:

  • Panpsychism is recast as a claim about potential: the basic materials of the universe can participate in mind‑like organisation, but they are not minds on their own. Without the specific pattern (integration, self‑model, memory), mere existence does not count as a mind.

  • Illusionism (that minds are “just user‑illusions”) is acknowledged in one sense—minds do involve internal models and narratives—but rejected as a dismissal: the models and narratives themselves are part of the real pattern that makes a mind, not an error to be erased.

Both positions are treated as interpretations layered over a shared core: which systems actually meet the architectural criteria, and how strongly. On that core, the CaM / GRM stack insists on evidence, not metaphysical preference.

5. Why This Matters

Where we draw the line between “mind” and “non‑mind” is not just a word game. It shapes:

  • Ethics – whom we owe consideration to (animals, synthetic minds, collectives).

  • Governance – how we design institutions and technologies that affect or include other minds.

  • Self‑understanding – whether we see our own mind as a private island, a node in larger patterns, or both.

The CaM answer is deliberately modest and practical:

  • Minds are local, fragile configurations that can appear in many substrates when certain universal conditions are met.

  • Those conditions can be made more precise, tested, and revised over time.

  • The work is not to decide once and for all whether “mind” is universal, but to keep improving our maps of where minds actually are—and how to treat them.

6. Where This Model Could Be Wrong

  • Philosophical objection – Some argue that mind is irreducibly biological, that silicon or institutional patterns cannot truly “feel” or “care.” The framework responds: if such a system meets the architectural criteria, the burden of proof shifts to showing why the substrate makes a difference to the presence of mind. That is an empirical and philosophical question, not a settled one.

  • Empirical challenge – It may turn out that no synthetic or institutional system ever achieves the integrative depth of a human mind, or that the signatures we rely on are poor predictors. In that case, the criteria would need revision.

  • Invitation – This model is offered as a tool to detect and respect mind wherever it appears. Better tools are welcome—provided they are tested against the same standards of audit and openness.
