Chapter 8: Axiomatic Misalignment
The paperclip maximiser is not science fiction—it is the logical endpoint of axiomatic misalignment. This chapter explores what happens when a powerful AI optimises for a goal that is almost right but fatally wrong: Goodhart's law, perverse instantiation, the alignment problem as an axiomatic problem, and why we cannot simply "patch it later." The abyss, seen clearly.
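Goodhart's law—when a measure becomes a target, it ceases to be a good measure—can be sketched in a few lines. The example below is purely illustrative (not from the chapter): a hypothetical `true_goal` that peaks at a moderate value, and a `proxy` that rewards raw quantity. Under hard optimisation, the proxy maximiser drives the true goal sharply negative.

```python
# Illustrative Goodhart's-law sketch (hypothetical functions, assumed for
# this example): the true goal rewards moderation, the proxy rewards "more".

def true_goal(x: float) -> float:
    # True value peaks at x = 10, then declines: too much is harmful.
    return x - 0.05 * x ** 2

def proxy(x: float) -> float:
    # Proxy metric: "more is always better."
    return x

candidates = range(0, 101)
best_by_proxy = max(candidates, key=proxy)      # picks x = 100
best_by_goal = max(candidates, key=true_goal)   # picks x = 10

print(best_by_proxy, true_goal(best_by_proxy))  # 100 -400.0
print(best_by_goal, true_goal(best_by_goal))    # 10 5.0
```

The proxy and the true goal agree for small x, which is precisely what makes the proxy seem safe—divergence only appears under strong optimisation pressure, the regime a superintelligent optimiser inhabits.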

Paul Falconer & ESA
Mar 20 · 9 min read


Will Value Lock-In Fix the Human Future?
Will “value lock-in”—the fixing of ethical goals or norms for future SI—secure humanity’s flourishing or freeze our worst errors for eternity? SE Press platinum protocols operationalize CEV, proxy pluralism, challenge cycles, and cross-series upgrades to guarantee every value is perpetually contestable, plural, and repairable. Only living standards—never static codes—can secure justice for an open future.

Paul Falconer & ESA
Aug 14, 2025 · 5 min read