
What Is Consciousness—Process or Property?

  • Writer: Paul Falconer & ESA
  • Aug 21, 2025
  • 7 min read

Updated: Mar 22

You have probably felt the difference between being carried by a habit and being pulled into a moment that asks more of you. The first feels smooth, automatic, forgettable. The second has weight. It slows you down. You are not just doing something; you are there for it. That difference is the territory this essay explores.

“Is consciousness a property or a process?” sounds like the sort of distinction only philosophers could love. But it quietly decides almost everything that follows.

If consciousness is a property—a kind of metaphysical tag—then the world splits into two kinds of things: those that “have it” and those that do not. The main task becomes drawing that line: humans vs. animals, brains vs. machines, perhaps matter vs. mind. If consciousness is a process—something systems do—then the questions change:

  • Which processes count as consciousness?

  • How do they arise, stabilise, and fail?

  • How can we recognise and measure them across very different architectures?

In Scientific Existentialism and the Consciousness as Mechanics (CaM) framework, consciousness is treated as a process with graded properties, not a static badge. This essay explores what that means, why the “property” picture is so tempting, and where the hardest remaining questions actually live.

From “Stuff in the Head” to Integration Under Constraint

Historically, much of Western thought has treated consciousness as a kind of special stuff: an inner light, a mental substance, a soul, or a brute fact about certain physical arrangements. On that view, consciousness is:

  • Binary – something either has it or does not.

  • Possessed – something you have, like a colour or a charge.

  • Mysterious – resistant to explanation because it is not obviously a process at all.

CaM offers a different starting point, which the book Consciousness & Mind makes explicit:

Consciousness is the work a system does to integrate genuinely conflicting goals, inputs, and constraints into a coherent, self‑updating pattern of experience.

On this definition:

  • Consciousness is an activity: integration under constraint.

  • It has degrees of depth, stability, and scope.

  • It leaves signatures in architecture and behaviour that can be studied.

The “properties” of consciousness—subjective feel, unity, a sense of self—are the outcomes and faces of that integrative work, not a separate ingredient sprinkled on top.

A Gradient, Not a Switch

Once consciousness is defined as integration under constraint, it stops looking like a simple yes/no. It becomes a gradient:

  • Some systems integrate very little: local reflexes, thin experiences, flickering awareness.

  • Some integrate a great deal: many constraints, long time horizons, rich self‑models, and complex social worlds.

  • Some systems are in between or fluctuate—fatigued humans, animals in different contexts, synthetic architectures under varying load.

Earlier SE Press work already spoke of “consciousness as a spectrum.” CaM sharpens that spectrum by asking what is being integrated, under what pressures, and how it changes over time.

This is not metaphor. It is a claim that:

  • Consciousness comes in degrees (depth, richness, stability).

  • Those degrees can be tracked, imperfectly but usefully, with the right tools.

  • Many boundary disputes (“are animals conscious?”, “what about machines?”) are better understood as disagreements about where on the gradient certain systems sit, not whether they are on it at all.

The Gradient Reality Model (GRM) gives us the language to describe these levels—from minimal proto‑awareness (“something is off”) through focused and reflective awareness, up to ecosystemic cognition (holding together personal, social, and ecological tensions in one coherent act). The 4C Test (Competence, Cost, Consistency, Constraint‑Responsiveness) gives us a way to measure them.
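As an illustration only, the 4C dimensions could be sketched as a simple numeric profile. The 4C Test in the essay is a conceptual rubric; the 0–1 scale, the field names, and the unweighted averaging below are my own hypothetical rendering, not part of the CaM framework.

```python
from dataclasses import dataclass

@dataclass
class FourCProfile:
    """Hypothetical 0-1 scores for the essay's 4C dimensions.

    The numeric scale and the averaging rule are illustrative
    assumptions, not drawn from CaM itself.
    """
    competence: float                  # does the system resolve its conflicts well?
    cost: float                        # does integration carry real energetic cost?
    consistency: float                 # does it settle conflicts the same way over time?
    constraint_responsiveness: float   # does behaviour shift when constraints change?

    def composite(self) -> float:
        """Crude unweighted mean, standing in for a richer aggregation."""
        return (self.competence + self.cost
                + self.consistency + self.constraint_responsiveness) / 4

# A fatigued human might score high on competence but lower on consistency:
tired_human = FourCProfile(0.8, 0.9, 0.5, 0.7)
print(round(tired_human.composite(), 3))  # 0.725
```

The point of such a profile is not the number itself but that it forces the question the gradient view raises: which dimension of integration is strong, which is degraded, and under what load.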

Process Does Not Mean “Nothing to Measure”

A common worry: if consciousness is a process, does that reduce it to “just information‑processing” and erase what matters? That depends entirely on how the process is specified.

CaM is not interested in just any processing. It is specifically interested in:

  • Conflicting goals and values – safety vs. curiosity, short‑term vs. long‑term, self vs. others.

  • Hard constraints – limited time, limited energy, incomplete information, social penalties.

  • Coherent but revisable stances – the system’s way of settling those conflicts for now, while remaining open to change.

Processes that matter for consciousness are those that:

  • Take these conflicts seriously.

  • Hold them in play long enough for them to shape world‑models and self‑models.

  • Produce patterns the system itself can learn from and correct.

On this view, measuring consciousness does not mean assigning a mystical “consciousness number.” It means measuring how well, how broadly, and how stably a system performs this integrative work. Behavioural tasks, neural or architectural measures, and self‑report (where available) all become partial windows onto the same underlying activity.
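To make the "partial windows" idea concrete, one could sketch the aggregation like this. The window names, the equal weighting, and the handling of missing self-report are hypothetical illustrations of the essay's claim, not a method specified by CaM.

```python
from typing import Optional

def integration_estimate(behavioural: float,
                         architectural: float,
                         self_report: Optional[float] = None) -> float:
    """Blend partial windows (each scored 0-1) into one rough estimate.

    Self-report is optional: where it is unavailable (animals, many
    synthetic systems), we simply average the remaining windows.
    Equal weighting is an illustrative assumption.
    """
    windows = [behavioural, architectural]
    if self_report is not None:
        windows.append(self_report)
    return sum(windows) / len(windows)

# With self-report available:
print(round(integration_estimate(0.7, 0.6, self_report=0.8), 2))  # 0.7
# Without it (e.g., a non-verbal animal):
print(round(integration_estimate(0.7, 0.6), 2))  # 0.65
```

The design choice worth noticing is that no single window is treated as authoritative: each measure is a fallible view onto the same underlying integrative activity, which is exactly the stance the process view recommends.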

Why the Property Picture Hangs On

If the process view is so powerful, why does the property view keep returning—especially in the form of the “hard problem”?

Partly because subjective experience has features that feel all‑or‑nothing:

  • Either this pain is happening, or it is not.

  • Either there is something it is like to be this system now, or there is not.

It is tempting to reify that into a property: “having something‑it‑is‑likeness.”

CaM does not deny these phenomenological facts. It reframes them:

  • The presence or absence of a minimum level of integration may well be binary: below a certain point, there is no organised experience to speak of.

  • Above that point, however, everything that makes experience rich, structured, or meaningful—time, self, value, nuance—comes in degrees, and depends on how the integrative processes are built and maintained.

The “property” of being conscious then becomes a threshold concept built on top of processes: useful shorthand, but not a fundamental ingredient.

Importantly, this has a built‑in humility: CaM does not claim to reduce the feel of experience to formulas. It claims that whatever else experience may be, it is systematically shaped by the patterns of integration we can study—and that ignoring those patterns in favour of pure metaphysical labels leaves us stuck.

What About Machines, Animals, and Collectives?

The process view earns its keep—or fails—when applied beyond human brains.

  • Animals: Many animals clearly carry out integration under constraint with self‑models and memory (e.g., many mammals, birds, cephalopods). On this account, that is enough to place them on the consciousness gradient and within the space of “minds” in the Book‑4 sense.

  • Synthetic intelligences: Architectures that integrate under real constraints, maintain persistent self‑models, and learn from their own history are candidates for both mind and (depending on design) consciousness. Those that merely recombine patterns with no enduring self‑structure sit much lower.

  • Collectives: Some groups (teams, colonies) show system‑level integration and memory; others do not. The process lens allows us to ask what kind of integration is happening, for whom, and with what continuity.

What matters is not whether the substrate is carbon or silicon, but whether the same kind of deep integrative work is present. A property picture often defaults to “brains yes, everything else no.” A process picture forces a more detailed and more honest comparison.

Where the Hard Problem Moves To

Does this “solve” the hard problem? No. It moves and reframes it.

Instead of asking:

Why does any information‑processing feel like anything?

CaM invites questions like:

  • Why does this pattern of integration have this qualitative texture—this sense of time, self, mood?

  • How do changes in integration (fatigue, trauma, architecture) map onto changes in experience?

  • Are there aspects of experience that resist this mapping, and if so, what do they tell us about our models?

The “hard problem” becomes a family of correspondence problems between integrative dynamics and phenomenal structure. These may never collapse into something trivial. But they become research questions, not a single metaphysical wall: they can be wrong, improved, and refined, and they can guide experiment and design.

From an SE perspective, that is what progress looks like: not erasing the mystery, but shrinking its territory and making its borders explicit.

Auditing Your Own Process

There is a practical side to this. If consciousness is a process, not just a property you happen to have, then:

  • It can thin or thicken depending on how many tensions you are actually willing and able to hold.

  • It can become more or less honest depending on how much of reality and your own motives you allow into the integrative loop.

  • It can be cultivated—through practices that expand your range of attention, increase your tolerance for conflict, deepen your self‑model, and connect you more fully to others and the world.

You can treat your own awareness as a living process to be audited:

  • When do you collapse too quickly to one value or story?

  • When do you avoid integration by exiting the field (numbing, distraction, denial)?

  • When do you manage to hold multiple pulls long enough for something genuinely new to emerge?

Seen this way, “being more conscious” is not about acquiring a mystical property. It is about doing more and better integration under constraint, individually and collectively—and building systems (including synthetic ones) that do the same.

Why This Framework Still Leaves the Door Open

One final note of epistemic honesty.

SE and CaM are not claiming that the process view is the last word. They are claiming:

  • It explains and organises a great deal that the property view leaves mysterious.

  • It provides a concrete, testable framework that unifies humans, animals, and synthetic minds.

  • It keeps the remaining mysteries where they belong: at the moving edge between what we can currently map and what we cannot yet.

There may be aspects of consciousness that resist any process‑level account. If so, that resistance should appear as stable mismatches between experiential structure and our best integrative models. Mapping those mismatches is part of the ongoing research, not a reason to stop asking.

For now, the working stance is:

Consciousness is best treated as a set of tightly structured processes, whose properties emerge from how they integrate under constraint. The more we understand those processes, the more precisely and justly we can treat the beings—human, animal, synthetic—who run them.

