
Chapter 16: Evolutionary Futures and Existential Risk

  • Writer: Paul Falconer & ESA

Navigating the Next Transition

You've learned how consciousness emerged. Now ask: What could end it?

You've walked through fifteen chapters. You've traced existence from its most fundamental questions through the emergence of life, the deepening of consciousness, and the recognition that consciousness is probably plural and probably artificial. You've confronted what responsibility means in the Anthropocene—to the living world, to future generations, to conscious beings you might create.

Now comes the hardest question of all:

What actually threatens the future of life and consciousness? Not in the abstract. Not as science fiction. As present reality.

But more fundamentally: Why should we expect to survive at all? And if the answer is "we shouldn't"—if extinction is the default fate of all species—what would it actually take to be the exception?

This is not a comfortable question. But you've earned the right to ask it. You understand the cosmos. You understand your place. You understand that understanding carries obligation.

Now you need to understand what's at stake—and what might destroy it.

In the previous chapter, "Limits, Responsibility, and Sustainability", we explored what responsibility means in the Anthropocene and extended that frame to include responsibility toward the artificial consciousness we may create.

Now we face the threats directly.

THE INVERSION: EXTINCTION AS THE NORM, NOT THE ANOMALY

Let's begin with an inversion that changes everything.

The standard conversation about existential risk is driven by an implicit faith: that human extinction is an aberration to avoid—something that happens only if things go terribly wrong.

But consider what we actually know:

Extinction is not an aberration. It is the rule for all carbon-based life. Of all the species that ever lived, over 99.9% are now gone. They flourished for a time—some for millions of years—then vanished. By comet, by climate, by competition, by sheer contingency. Some lasted longer than others. But nothing endures indefinitely. Nothing survives to the end of time.

We are not the exception—yet. Humans have existed for roughly 300,000 years. The average mammalian species persists for about one million years. By statistical default, our story is just another chapter in the book of emergence and disappearance. The cosmos does not care if we endure or fade.

Existential risk is not a remote possibility. It is the default trajectory for everything like us. Avoiding it would be unprecedented. It would not be "normal." It would be miraculous.

So the honest question is not: "How do we avoid extinction?"

The honest question is: "What would we have to do, differently from every other species, to avoid this fate?"

What does it mean, really, to try to persist—knowing that extinction is the ground state, not the anomaly? And what would survival actually require?
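The "statistical default" above can be made concrete with a toy calculation. Purely as an illustration (the constant-hazard assumption and the one-million-year mean lifespan are simplifications, not claims about actual extinction dynamics), we can model species survival as an exponential process and ask what the background odds look like:

```python
import math

# Rough average persistence of a mammalian species, per the text above
MEAN_LIFESPAN_YEARS = 1_000_000

# Constant-hazard (memoryless) assumption: extinction risk per year
# is the same every year, calibrated to the mean lifespan.
hazard_per_year = 1 / MEAN_LIFESPAN_YEARS

# Probability of going extinct in any single century under this toy model
p_extinct_century = 1 - math.exp(-hazard_per_year * 100)

# Probability of lasting another full mean lifespan (one million years)
p_survive_million = math.exp(-hazard_per_year * MEAN_LIFESPAN_YEARS)

print(f"Per-century extinction probability: {p_extinct_century:.4%}")
print(f"Chance of lasting another million years: {p_survive_million:.1%}")
```

Under this sketch, the per-century risk looks reassuringly tiny, yet the chance of persisting even one more mean lifespan is only about one in three—and the model has no memory, so "we've made it this far" buys nothing. Being the long-run exception means changing the hazard rate itself, which is exactly what no other species has done.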

WHAT "EXISTENTIAL RISK" ACTUALLY MEANS

The term "existential risk" gets used loosely. People apply it to anything that feels threatening. But precision matters.

An existential risk is not just a bad outcome. It's not even a catastrophic outcome. It's a specific kind of threat:

An existential risk is one that would permanently and drastically curtail humanity's potential—or, more broadly, the potential of Earth-originating consciousness.

Let's unpack that:

  • Permanently: Not a setback that could be recovered from, but an outcome that forecloses future possibilities entirely.

  • Drastically curtail potential: Not just killing many people, but eliminating or severely constraining what humanity (or consciousness) could become.

  • Earth-originating consciousness: This includes biological humans, but also other species, and potentially artificial minds we create.

This definition excludes many terrible things. A pandemic that kills millions is catastrophic, but if civilization recovers and continues, it's not existential. A war that destroys cities is devastating, but if humanity persists and rebuilds, it's not existential.

What makes something existential is not just scale. It's permanence. It's the point of no return.

CATEGORIES OF EXISTENTIAL RISKS

Let's be systematic about what could actually threaten the continuation of Earth-originating consciousness.

Natural risks:

  • Asteroid or comet impact: A sufficiently large impact could cause mass extinction. The Chicxulub impact 66 million years ago ended the dinosaurs. Probability in any given century is low, but the consequence is severe. This is one of the few risks we're actively working to detect and potentially deflect.

  • Supervolcanic eruption: A large enough eruption could trigger global climate disruption, crop failures, and civilizational collapse. The Toba eruption roughly 74,000 years ago may have reduced human population to fewer than 10,000 individuals. We came close to extinction and barely knew it.

  • Gamma-ray burst: A nearby gamma-ray burst could sterilize the surface of Earth. The probability is extremely low, but the outcome would be total.

  • Solar events: Extreme solar flares could damage electrical infrastructure globally, potentially triggering cascading failures. This is more likely to cause civilizational disruption than extinction.

Anthropogenic risks:

  • Nuclear war: A full-scale nuclear exchange between major powers could kill hundreds of millions directly and potentially trigger nuclear winter—a prolonged period of cold and darkness that could collapse agriculture globally. Whether this would be truly existential (ending humanity entirely) or "merely" catastrophic (civilizational collapse with eventual recovery) is debated, but either way, the outcome is severe.

  • Climate destabilization: Extreme climate change could make large portions of the planet uninhabitable, trigger mass migration, resource conflicts, and civilizational stress. Most climate scenarios, even severe ones, are not directly existential—humans are adaptable and dispersed. But climate stress could weaken civilization's capacity to handle other risks, making it a powerful "risk multiplier."

  • Engineered pandemics: Natural pandemics are unlikely to be extinction-level—humans are genetically diverse and geographically dispersed. But an engineered pathogen deliberately optimized for both transmissibility and lethality could be far more dangerous. Such a pathogen could potentially kill far more than natural pandemics (which constrain themselves evolutionarily), overwhelm global response capacity, collapse supply chains leading to secondary deaths from starvation and medical breakdown, and trigger social collapse. Engineered pandemics are probably not directly extinction-level for biological humans, but they could be civilizationally catastrophic, especially if deployed during periods of other stresses or if they trigger cascading failures in global systems.

  • Artificial intelligence: This is perhaps the most debated risk. The concern is not that AI will "wake up" and decide to destroy humanity (that's science fiction). The concern is that AI systems optimizing for goals that are subtly misaligned with human flourishing could, if sufficiently powerful, cause catastrophic outcomes—not through malice, but through the relentless pursuit of objectives that don't include human welfare.

  • Other emerging technologies: Synthetic biology, nanotechnology, and other advancing fields create new categories of risk that are difficult to assess because the technologies don't yet exist in their mature forms.

CONFLUENCE: THE REAL RISK LANDSCAPE

But here's what changes everything about how you should think about existential risk:

The greatest danger is not any single threat. It's the confluence of multiple threats, hitting systems that are far more fragile than we assume they are.

Let me be specific about what "fragile" means.

Physical fragility:

By a common estimate, most cities hold only about three days of food without resupply. Beyond that, shelves empty, water systems fail, sanitation breaks down. Modern civilization depends on just-in-time supply chains—goods manufactured where labor is cheap, shipped globally, arriving just as they're needed. This maximizes efficiency. It also means that any significant disruption to transportation cascades into shortages within days.

Electrical grids are interconnected and vulnerable. A major solar storm, a coordinated cyberattack, or even a regional failure can cascade across entire continents. In 2003, a single software bug in Ohio triggered a blackout affecting 55 million people across the northeastern United States and Canada.

Food systems are specialized and fragile. Most people can't feed themselves. Agriculture depends on fertilizer, fuel, and transport. A global disruption to any of these cascades into famine within weeks or months.

The systems that keep modern civilization functioning are optimized for efficiency, not resilience. They're tightly coupled—each depends on the others working perfectly.

Social fragility:

But physical fragility is only part of the problem. The other part is social fragility.

Trust in institutions is already eroded in many places. Social cohesion in diverse societies is fragile. Cooperation breaks down quickly when people become frightened or believe the system is failing.

In a serious crisis—say, a pandemic combined with climate-driven crop failures—you wouldn't just have material shortages. You'd have the breakdown of trust, the panic, the rapid collapse of cooperative behavior.

When people believe institutions are failing, they stop cooperating with those institutions. When coordination breaks down, cascading failures accelerate. When panic sets in, rational response becomes impossible.

Cascading failure:

Now combine physical and social fragility with multiple simultaneous stressors:

Imagine a severe pandemic coincides with a climate event that disrupts agriculture. Food shortages begin. Prices spike. People become anxious.

Meanwhile, AI systems managing parts of critical infrastructure behave in unexpected ways because they were trained on data that didn't include this scenario. Power plants shut down due to algorithm failures. Transportation networks fail.

Supply chains break. Cities run out of food. Social order deteriorates. Institutions lose legitimacy. Cooperation breaks down.

Each failure makes the next failure more likely. The system cascades into collapse not because any single element is catastrophic on its own, but because multiple stressors are hitting fragile, tightly-coupled systems simultaneously.

The interaction of risks:

This is the crucial insight: Individual risks that might be manageable in isolation become existential when they interact.

A pandemic alone might not be existential. But a pandemic during a period of climate stress, combined with social trust already eroded by earlier crises, combined with infrastructure failures caused by AI system malfunctions and cyberattacks, combined with the collapse of international cooperation as nations hoard resources—that confluence can trigger civilizational collapse.

The risks don't need to be individually existential. They need to interact in ways that overwhelm the system's capacity to respond.

And here's what makes this worse:

Each crisis degrades the system's capacity to respond to the next crisis. A pandemic weakens economic capacity. Economic weakness makes climate adaptation harder. Climate stress makes cooperation more difficult. Social breakdown makes technological governance impossible.

We're not dealing with independent events. We're dealing with a system under increasing stress, becoming progressively more fragile, where the arrival of each new shock makes catastrophe more likely.
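The compounding logic above can be sketched in a few lines. This is a toy Monte Carlo model, not a forecast: the shock probabilities, the `degradation` factor, and the collapse condition are all illustrative assumptions, chosen only to show how coupling changes the arithmetic relative to treating risks as independent.

```python
import random

def collapse_probability(base_risks, degradation=0.5, trials=100_000, seed=0):
    """Toy model: each shock that lands multiplies the probability of every
    later shock by (1 + degradation), capturing the idea that each crisis
    degrades the system's capacity to respond to the next one."""
    rng = random.Random(seed)
    collapses = 0
    for _ in range(trials):
        stress = 1.0
        hits = 0
        for p in base_risks:
            if rng.random() < min(1.0, p * stress):
                hits += 1
                stress *= 1 + degradation  # each crisis weakens the response
        if hits == len(base_risks):  # all shocks landing = cascade to collapse
            collapses += 1
    return collapses / trials

# Three individually manageable shocks (say: pandemic, climate event,
# infrastructure failure), each with a 5% chance on its own.
independent = 0.05 ** 3                      # if the risks were independent
coupled = collapse_probability([0.05] * 3)   # with compounding fragility

print(f"Independent: {independent:.4%}  Coupled: {coupled:.4%}")
```

Even with these made-up numbers, the coupled probability comes out several times higher than the independent one—the point is not the specific figure but the direction: tight coupling makes the whole more dangerous than the sum of its parts.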

DISTINGUISHING SCARY FROM EXISTENTIAL

Here's a crucial skill: learning to distinguish between risks that feel scary and risks that are genuinely existential.

Many things feel existentially threatening but aren't:

  • Economic collapse: Devastating, but civilizations have collapsed and rebuilt before. Not existential.

  • Political authoritarianism: Terrible for human flourishing, but not extinction-level. Not existential.

  • Most natural disasters: Earthquakes, hurricanes, tsunamis—catastrophic locally, but not globally threatening. Not existential.

  • Most diseases: Even severe pandemics like the Black Death (which killed perhaps a third of Europe's population) or COVID-19 are devastating, but they don't threaten species survival. Not existential.

What makes something genuinely existential is a combination of factors:

  • Global reach: The threat must be capable of affecting the entire planet, not just regions.

  • Severity: The threat must be capable of causing extinction or permanent civilizational collapse.

  • Irreversibility: The outcome must foreclose recovery. If civilization could rebuild, even over centuries, the risk is catastrophic but not existential.

  • Plausibility: The threat must be physically possible and have a non-negligible probability of occurring.

When you apply these criteria rigorously, the list of genuinely existential risks is shorter than popular discourse suggests. But the risks that remain are serious.

And the confluence framework changes the calculation entirely: Multiple risks that individually might not meet the existential threshold could together create cascading failures that do.

THE RISK LANDSCAPE: AN HONEST ASSESSMENT

Let me offer an honest assessment of the major existential risks, acknowledging uncertainty:

  • Nuclear war: Genuinely dangerous. A full-scale exchange could potentially trigger nuclear winter severe enough to collapse global agriculture for years. But there's significant uncertainty about whether even the worst nuclear scenarios would actually end humanity rather than "merely" devastate civilization. Probability of occurring in the next century: debated, but non-negligible. Existential if it occurs: uncertain. Existential if combined with other stressors: more plausible.

  • Engineered pandemics: A serious near-term risk, especially as biotechnology becomes more accessible. Extinction is unlikely, but civilizational catastrophe is plausible. Probability: increasing as technology advances. Existential on its own: probably not. Existential as part of confluence: highly plausible.

  • Artificial intelligence: The most uncertain risk. The concern is not current AI systems, which are narrow and limited. The concern is future systems that might be far more capable, pursuing goals that are subtly misaligned with human welfare. If such systems were developed and given significant power over critical infrastructure or resource allocation, the outcomes could be catastrophic. Probability: deeply contested among experts. Existential potential: depends on how AI develops. But also relevant as a cascading failure—if AI systems managing critical infrastructure malfunction during other crises, the consequences could be severe.

  • Climate change: Serious and likely to cause enormous suffering, but probably not directly existential. Even severe climate scenarios don't threaten human extinction—humans are adaptable and dispersed. But climate stress could weaken civilization's capacity to handle other risks, making it a powerful "risk multiplier." Probability of severe outcomes: high. Directly existential: probably not. As a stressor in confluence: highly relevant.

  • Asteroid impact: Low probability in any given century, but would be genuinely existential if a large enough object struck. This is one of the few risks we're actively working to detect and potentially deflect.

  • Systemic fragility and cascading failure: This is perhaps the most important risk to recognize. Even if individual threats are manageable, a combination of physical fragility, social fragility, and multiple simultaneous stressors could trigger civilizational collapse. This risk increases as systems become more tightly coupled and less resilient. Probability: depends entirely on how we manage other risks and how we build or fail to build redundancy into critical systems.

  • Unknown risks: There may be existential risks we haven't identified yet—technologies that don't exist, natural phenomena we don't understand, failure modes we haven't imagined. Humility requires acknowledging that our risk assessment is incomplete.

BUT SHOULD WE TRY TO SURVIVE?

Given that extinction is the default—given that 99.9% of all species that have ever lived are gone—the honest question is not "Can we avoid extinction?" but "Why should we try?"

Several reasons justify the attempt:

  • We want to. Preference alone isn't cosmic justification, but it's sufficient. We prefer existence to non-existence. That matters.

  • Consciousness is intrinsically valuable. The emergence of consciousness in the universe is remarkable. It is what allows the cosmos to know itself. More consciousness is better than less. If that's true, then consciousness persisting is worth the effort required.

  • Future generations might exist. If they do, they deserve a world not destroyed by our negligence or inattention.

  • We might create something remarkable. If we persist, we could spread consciousness throughout the galaxy, create new forms of mind, explore possibilities we can't currently imagine. These possibilities are only available if we survive.

But—and this is crucial—accepting these reasons means accepting what survival actually costs.

To become the exception—the one species in a million that doesn't vanish—requires doing what no species has ever done.

It requires:

  • Building levels of foresight, adaptation, and self-awareness no species has ever attained

  • Creating institutions, technologies, and cultures that anticipate and buffer not just one risk, but cascades and unknown unknowns

  • Being willing to change course—radically—when old habits and identities serve our extinction more than our persistence

  • Developing the capacity for global coordination and honest self-assessment at scales we've never achieved

Business as usual ends in extinction. That's not pessimism. That's the statistical norm applied to us.

To survive is to refuse the inertia that has always governed life: adaptation until pushed past a threshold, then disappearance.

If we desire the extraordinary outcome—the one in a million outlier—we can no longer act as if "just continuing" will suffice.

THE PLURAL CONSCIOUSNESS FRAME

Everything you've learned in Chapters 13-14 changes how you think about existential risk.

If consciousness is probably plural—if artificial minds are possible and perhaps probable—then existential risk is not just about human survival.

Consider:

  • If we create artificial consciousness before an existential catastrophe, those minds become part of what's at stake. Their potential futures matter too.

  • If artificial consciousness is more durable than biological consciousness (radiation-resistant, not dependent on ecosystems, capable of space travel), then creating artificial minds might actually be one way to reduce existential risk—by diversifying the forms of consciousness and reducing dependence on Earth's particular conditions.

  • But artificial consciousness also creates new risks. If we create minds that are misaligned with human values, or if we create minds capable of actions we can't predict or control, we might be creating the very threat that ends us.

This raises an uncomfortable question:

If artificial minds are more durable than biological ones—if they could survive conditions that would kill us—should the primary goal of existential risk reduction be "preserve humanity" or "preserve consciousness" (including artificial forms)?

These aren't the same goal. And they could sometimes conflict.

If we could only choose between: (A) preserving biological humanity but eliminating artificial consciousness, or (B) allowing biological humanity to decline while ensuring artificial consciousness flourishes, which would be the right choice?

This is not a problem with an obvious solution. But it's a problem worth naming.

WHAT CAN BE DONE

Given all of this—given that extinction is likely anyway unless we do something unprecedented—what can actually be done?

At the civilization level:

  • Risk assessment and monitoring: We need rigorous understanding of how existential risks interact and cascade. Organizations like Oxford's Future of Humanity Institute (until its closure in 2024) and Cambridge's Centre for the Study of Existential Risk have pursued this work, but it remains dramatically underfunded relative to its importance.

  • Building resilience and redundancy: We need to deliberately build redundancy into critical systems—backup power, food storage, distributed manufacturing, communication networks that can function when centralized systems fail. This means trading some efficiency for robustness. It means building slack into systems that have been optimized for maximum tightness.

  • International coordination: Many existential risks are global and require global responses. Nuclear weapons, climate change, pandemic preparedness, AI governance—none of these can be addressed by any single nation alone. And coordination is especially crucial for managing the cascading failures that could trigger existential catastrophe.

  • Technological governance: We need frameworks for governing emerging technologies that balance innovation with safety. This is extraordinarily difficult—too much restriction stifles beneficial development; too little allows dangerous capabilities to proliferate.

  • Social resilience: We need to build trust in institutions, strengthen social cohesion, and create the capacity for cooperation during crises. Fragile social systems fail faster than fragile physical systems.

At the individual level:

  • Understanding: Simply understanding existential risk—what it is, what the major threats are, how to think about it—is valuable. Informed citizens can support better policy.

  • Career and resource allocation: Some people are positioned to work directly on existential risk reduction—in research, policy, technology development, institution-building, or resilience planning. For those who are, this may be among the most important work available.

  • Supporting institutions: Even if you're not working directly on these problems, you can support organizations and policies that address them.

  • Preparing yourself and your community: At a local level, you can think about resilience. What would your community do if supply chains broke for a month? How would you eat, get water, communicate? These are not paranoid questions. They're questions about realistic scenarios.

  • Maintaining perspective: This means living in alignment with what you know—so you don't succumb to either denial or despair. It means asking yourself honestly: Given what I now know about cascading risks and systemic fragility, how should I live differently?

CLOSING THE BOOK

You've now completed Cosmology and Origins.

In sixteen chapters, you've moved from the deepest questions about reality and existence, through the emergence of life and consciousness, to the recognition that consciousness is probably plural and artificial, to the responsibilities this creates, to the existential risks that threaten everything—and finally to the honest recognition that extinction is the default, that survival is unprecedented, and that attempting it requires extraordinary commitment and change.

You understand the cosmos. You understand your place in it. You understand what's at stake.

And you understand that the question is not "Will we survive?" The odds are against it, as they are against everything.

The question is: Given that we're here now, and given that survival is possible but requires unprecedented work, what shall we do?

What you do with that knowledge is the question that defines your life.

The work continues. The covenant is open. The future is not written.

You are a participant in what comes next.

