Kristina Šekrst
The Illusion Engine began with a simple question: how could machines ever think? It quickly met a less simple one: how could I, with a foot in both philosophy and software engineering, make either side intelligible to the other? Engineers glaze over at metaphysics; philosophers glaze over at code. Somewhere between the two, confusion turned into fascination.
The book grew out of that mismatch, moving between deep technical dives – attention mechanisms, backpropagation, transformers – and philosophical puzzles about consciousness, intentionality, and meaning. It asks whether a machine that hallucinates might, in doing so, come closer to something like experience.
This question continues threads from my recent work: “Do Large Language Models Dream of Electric Fata Morganas?” (forthcoming in the Journal of Consciousness Studies), “Unjustified Untrue Beliefs: AI Hallucinations and Justification Logics”, and “The Chinese Chatroom: AI Hallucinations, Epistemology, and Cognition”. Each explores a different corner of the same problem – what it means for a system to appear as if it has a mind. The Fata Morganas paper argues that a sophisticated hallucination can be phenomenologically indistinguishable from a genuine mental state. The chatroom paper re-examines Searle’s argument in a new empirical context, and the epistemological piece asks whether our confidence in “understanding” these systems might itself be an illusion.
Interpretability and explainability should have helped. They have not. The field of explainable AI still lingers somewhere between aspiration and metaphor. We can trace some attention heads, label a few neurons, and visualize activation patterns that correlate with linguistic categories, but the causal picture remains opaque. Recent interpretability work has provided us with networks and attribution graphs whose inner logic we can partially decode – yet the general phenomenon remains mysterious. Why do models hallucinate at all? Why do they sometimes reason correctly and sometimes invent? We have patterns and partial answers, but no comprehensive theory.
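To make “tracing an attention head” concrete, here is a minimal sketch that pulls the raw attention weights out of a small pretrained transformer and prints, for each token, where one head looks hardest. It assumes the Hugging Face transformers library and the public gpt2 checkpoint; the sentence and the choice of layer and head are arbitrary illustrations, not anything from the book.

```python
# Minimal sketch: inspect one attention head of a pretrained GPT-2 model.
# Assumes `pip install torch transformers` and internet access to download gpt2.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_attentions=True)
model.eval()

sentence = "The engine produced an illusion of understanding."  # illustrative input
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# `outputs.attentions` is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
layer, head = 0, 0  # arbitrary choice for illustration
weights = outputs.attentions[layer][0, head]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# For each token, show the token this head attends to most strongly.
for i, tok in enumerate(tokens):
    j = int(weights[i].argmax())
    print(f"{tok:>12} -> {tokens[j]:<12} ({weights[i, j].item():.2f})")
```

Even at this level of access, what one recovers is a table of weights rather than an explanation – which is roughly the gap between partial decoding and a comprehensive theory described above.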
The Illusion Engine
The Illusion Engine does not claim to answer those questions. It tries instead to map the terrain where equations about gradient descent run into arguments about qualia. It suggests that the border between mind and mechanism may not be where we thought it was, and that the effort to explain away hallucinations might one day explain consciousness itself.

