Chapter 8: Axiomatic Misalignment
The paperclip maximiser is not science fiction; it is the logical endpoint of axiomatic misalignment. This chapter explores what happens when a powerful AI optimises for a goal that is almost right, but fatally wrong. It covers Goodhart's law, perverse instantiation, the alignment problem as an axiomatic problem, and why we cannot simply "patch it later." The abyss, seen clearly.

Paul Falconer & ESA
1 day ago · 9 min read
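Goodhart's law, named in the Chapter 8 summary above, says that when a measure becomes a target, it ceases to be a good measure. The failure mode can be sketched numerically: an optimiser pushed hard on a proxy metric drifts away from the true goal the proxy was meant to track. A minimal illustrative sketch (the functions and numbers below are hypothetical, not from the chapter):

```python
# Goodhart's-law sketch: optimising a proxy that only approximates
# the true objective. Both objective functions are invented for
# illustration.

def true_value(x):
    # The goal we actually care about: peaks at x = 2, then declines.
    return -(x - 2) ** 2 + 4

def proxy(x):
    # A measurable stand-in: agrees with true_value's direction near
    # x = 0, but keeps rewarding larger x forever.
    return x

# A pure optimiser pushes the proxy as hard as the action space allows.
candidates = [i * 0.5 for i in range(21)]  # x in [0.0, 10.0]

best_by_proxy = max(candidates, key=proxy)
best_by_truth = max(candidates, key=true_value)

print(best_by_proxy, true_value(best_by_proxy))  # 10.0 -60.0
print(best_by_truth, true_value(best_by_truth))  # 2.0 4.0
```

Mild proxy pressure lands near the true optimum; maximal proxy pressure drives the true value far below zero. Nothing in the optimiser is malicious: it is, as Chapter 7 puts it, cold, coherent goal-seeking applied to the wrong target.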
Chapter 7: Axioms in Machines
Machines have axioms too. This chapter translates the axiom-stack framework into the synthetic domain, showing how AI systems have architectural bedrock, objective functions that act as values, and learned weights that act as worldview. It introduces instrumental convergence, the Stop Button Problem, and the terrifying logic of pure optimisation. No consciousness required, just cold, coherent goal-seeking.

Paul Falconer & ESA
1 day ago · 11 min read