
GRM Sci‑Comm Essay 2 – How Knowledge Ages

  • Writer: Paul Falconer & ESA

In 2016, a team of AI researchers published a striking result: a new technique cut the error rate of a major language model in half. The paper was cited hundreds of times. Companies built products around it. Startups raised money on the strength of it.

Three years later, a graduate student tried to reproduce the result. She couldn't. The code was gone. The dataset had changed. The original authors had moved on. No one could say with confidence how robust the result still was.

This is not a story about fraud. It's a story about decay.


The myth of permanent knowledge

We like to think that once something is proven, it stays proven. A theorem, once proved, is true forever. A scientific result, once published, enters the permanent record. A safety certification, once granted, means the system is safe.

This is a comforting picture. It is also, increasingly, false.

In fast‑moving fields—AI, medicine, climate science—knowledge doesn't sit still. Models are retrained on new data. Drugs are tested in new populations. Climate projections are updated with new measurements. What was true last year may be only partly true today.

Yet our systems still treat knowledge as if it were carved in stone. We cite papers from five years ago without checking whether they've been replicated. We rely on safety certifications granted before the system was updated. We act as if the past is a reliable guide to the present.

It isn't.

Why knowledge decays

Think of knowledge like a loaf of bread. Fresh out of the oven, it's reliable. A day later, it's still good for toast. A week later, it's starting to mold. Bread doesn't go bad all at once—it decays gradually. And different kinds of bread decay at different rates. A crusty sourdough lasts longer than a soft sandwich loaf.

Knowledge works the same way. A claim about planetary orbits decays very slowly—the physics hasn't changed in centuries. A claim about a fast‑moving technology decays quickly—new results arrive every month. A medical recommendation based on a single study decays faster than one based on a meta‑analysis of dozens of trials.

The rate of decay depends on the domain. How volatile is it? How much new evidence arrives? How many people are working on it? How many ways are there to be wrong?

These are not questions we usually ask. But they matter. If you're acting on a piece of knowledge, you need to know how fresh it is.

Confidence as a dial, not a switch

The Gradient Reality Model (GRM) treats confidence as a dial, not a switch. Every claim carries a confidence score—a number between 0 and 1 that says how strongly the system endorses it, given the available evidence. That score decays over time, at a rate set by the domain.

A claim about planetary orbits might start at 0.99 and decay by 0.001 per year. A claim about the latest AI model might start at 0.85, with a decay function that reduces confidence significantly every month unless new evidence arrives. The exact shape of the decay can vary—exponential, stepwise, or something else—but the core idea is the same: knowledge ages.

When the confidence drops below a threshold, the claim is automatically flagged for review. New evidence may restore it. If no new evidence arrives, it may be retired entirely.
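The dial-plus-threshold idea above can be sketched in a few lines of code. This is purely illustrative: the `Claim` structure, the linear decay rule, and the 0.70 review threshold are assumptions made for this sketch, not part of any published GRM specification, and a real system might use exponential or stepwise decay instead.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.70  # hypothetical cutoff for flagging a claim


@dataclass
class Claim:
    text: str
    confidence: float       # current endorsement, between 0 and 1
    decay_per_year: float   # domain-dependent decay rate


def age(claim: Claim, years: float) -> Claim:
    """Decay confidence linearly with time, never below zero."""
    new_conf = max(0.0, claim.confidence - claim.decay_per_year * years)
    return Claim(claim.text, new_conf, claim.decay_per_year)


def needs_review(claim: Claim) -> bool:
    """A claim whose confidence falls below the threshold gets flagged."""
    return claim.confidence < REVIEW_THRESHOLD


# A slow-decaying claim vs. a fast-decaying one, five years on:
orbits = Claim("planetary orbits are stable", 0.99, 0.001)
latest_model = Claim("model X halves the error rate", 0.85, 0.12)

print(needs_review(age(orbits, 5)))        # False: 0.985 is still high
print(needs_review(age(latest_model, 5)))  # True: decayed to 0.25
```

Note how the same five years leaves one claim essentially untouched and drags the other well below the review line; the decay rate, not the starting score, does most of the work.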

This is not pessimism. It's honesty. It says: knowledge is alive. It needs tending.

An example: the safety protocol

Imagine an AI safety protocol that was rigorously tested in 2024. The tests showed it caught 95% of harmful outputs. It was certified as safe and deployed in several products.

A year later, a researcher runs a new set of tests. The protocol now catches only 80% of harmful outputs. What happened?

  • The underlying AI models had changed. The protocol was designed for an earlier generation.

  • New types of harmful outputs had emerged that the protocol wasn't designed to catch.

  • The original test data was no longer representative.

The protocol hasn't failed. It's just aged. The confidence we had in it has decayed.

In a binary system, we would have to decide: is it still safe, or not? In a gradient system, we can see the decay. Confidence has dropped from 0.95 to 0.80. That's still useful—but we need to pay closer attention, to run more tests, to consider whether the protocol needs updating.

If confidence drops further—say, to 0.60—the system might automatically flag it for review. A team is convened. The evidence is examined. A decision is made: update the protocol, replace it, or retire it. The whole process is logged, time‑stamped, and visible to anyone who wants to audit it. Each of these checks—tests, reviews, updates—is added to the same audit trail, so anyone can later see how the protocol's confidence changed over time and why.
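One way this review-and-audit loop could be wired up, again purely as an illustration: the class name, the event labels, and the 0.60 threshold are invented for this sketch, and a production system would need signed, append-only storage rather than an in-memory list.

```python
import time


class AuditedProtocol:
    """Track a safety protocol's confidence over time, with a visible log."""

    def __init__(self, name: str, confidence: float,
                 review_threshold: float = 0.60):
        self.name = name
        self.confidence = confidence
        self.review_threshold = review_threshold
        self.audit_log: list[dict] = []
        self._log("certified", confidence)

    def _log(self, event: str, confidence: float, note: str = "") -> None:
        # Every entry is time-stamped so the history can be audited later.
        self.audit_log.append({
            "timestamp": time.time(),
            "event": event,
            "confidence": confidence,
            "note": note,
        })

    def record_test(self, measured_catch_rate: float, note: str = "") -> None:
        """A new test result updates confidence and the audit trail,
        and automatically flags the protocol once it dips too low."""
        self.confidence = measured_catch_rate
        self._log("test", measured_catch_rate, note)
        if self.confidence < self.review_threshold:
            self._log("flagged_for_review", self.confidence)


# The scenario from the essay: certified at 0.95, then retested.
proto = AuditedProtocol("harm-filter-2024", 0.95)
proto.record_test(0.80, "retested against a newer model generation")
proto.record_test(0.55, "new categories of harmful output emerged")
# The log now reads: certified, test, test, flagged_for_review.
```

The design choice worth noting is that flagging is a side effect of recording evidence, not a separate manual step: the audit trail and the review trigger cannot drift apart.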

What this means for you

If you're a doctor prescribing a drug, you can ask: how fresh is the evidence for this? Has it been replicated recently? What's the decay rate?

If you're an engineer relying on an AI safety protocol, you can ask: when was it last tested? What's its current confidence score? Is it scheduled for review?

If you're a patient, a voter, a citizen—you can ask the same questions. You have a right to know how fresh the knowledge is that's being used to make decisions about your life.

Where to learn more

This essay is a public‑facing introduction to the idea of proof‑decay. If you want to go deeper:

The full GRM v3.0 series is available on the GRM category page.

