Today's post is by Dan Williams, a PhD candidate in the Faculty of Philosophy at the University of Cambridge.
If you had to bet on it, what’s the probability that your loved ones have been replaced by visually indistinguishable imposters? That your body is overrun with tiny parasites? That you’re dead? As strange as these possibilities are, each captures the content of a well-known delusional belief: Capgras delusion, delusional parasitosis, and Cotard delusion, respectively.
Delusional beliefs come in a wide variety of forms and arise from a comparably diverse range of underlying causes. One of the deepest challenges in the contemporary mind sciences is to explain them. Why do people form such delusions? And why on earth do they retain them in the face of seemingly overwhelming evidence against them?
My new paper “Hierarchical Bayesian Models of Delusion” presents a review and critique of a fascinating body of research in computational psychiatry that attempts to tackle these questions. To cut a long story short, that research suggests that delusions arise from dysfunctions in a process of hierarchical Bayesian inference.
To understand what this means, you have to grasp three ideas.
The first is the idea that information processing in the brain is hierarchical. It’s not always clear exactly how this claim should be understood (see below). The core idea, however, draws inspiration from the structure of sensory and motor areas of the neocortex, as well as from the extraordinary success of deep (i.e. hierarchical) neural networks in machine learning.
For example, the ventral visual pathway comprises a cascade of functional areas (e.g. LGN-V1-V2-V4-IT) in which low levels represent simple stimulus features such as edge segments and basic colour and contrast information, while higher levels process information at increasingly large spatiotemporal scales and degrees of abstraction, eventually reaching representations of things like faces and houses. Many neuroscientists extrapolate from such findings that all information processing in the neocortex is hierarchical, with perceptual experiences occupying lower hierarchical levels and beliefs occupying higher levels.
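To give a toy picture of what “hierarchical” means here (my own illustration, not anything drawn from the models under review), imagine a stack of levels in which each level pools over a wider window of the level below, so that receptive fields grow with depth:

```python
import numpy as np

# Toy sketch of a processing hierarchy: each level pools over a wider
# window of the level below it, so higher levels respond to structure at
# larger spatial scales. Nothing here models the actual visual pathway.
signal = np.random.rand(64)  # stand-in for raw sensory input
level = signal
for depth in range(1, 4):
    level = level.reshape(-1, 2).mean(axis=1)  # receptive field doubles
    print(f"level {depth}: {level.size} units, "
          f"each pooling over {2 ** depth} input samples")
```

Higher levels of such a stack are, by construction, tuned to larger-scale structure, which is the feature of the hierarchy that matters for the first challenge below.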
The second is the idea that information flow in this hierarchy is bi-directional: information flows up the hierarchy to higher levels, but it also flows back down from higher levels to primary sensory areas.
The third is the idea that this bi-directional flow implements a form of (approximate) Bayesian inference, combining prior expectations about the state and structure of the world, carried via top-down connections, with incoming sensory evidence in a statistically optimal way.
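To make the third idea concrete, here is a minimal sketch of the textbook Gaussian case (the function and its numbers are illustrative, not drawn from any model in the literature): a top-down prior expectation and bottom-up sensory evidence are combined in proportion to their precisions, i.e. their inverse variances.

```python
# Minimal sketch of precision-weighted Bayesian updating in the Gaussian
# case: the prior (top-down expectation) and the observation (bottom-up
# sensory evidence) are weighted by their precisions (inverse variances).
def bayes_update(prior_mean, prior_var, obs, obs_var):
    prior_precision = 1.0 / prior_var  # confidence in the expectation
    obs_precision = 1.0 / obs_var      # reliability of the sensory signal
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean
                 + obs_precision * obs) / post_precision
    return post_mean, 1.0 / post_precision

# Reliable evidence dominates a weak prior...
print(bayes_update(prior_mean=0.0, prior_var=4.0, obs=1.0, obs_var=0.25))
# ...while a very precise prior barely moves in response to the same evidence.
print(bayes_update(prior_mean=0.0, prior_var=0.01, obs=1.0, obs_var=0.25))
```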
When all is going well, this process of hierarchical Bayesian inference is alleged to leverage the noisy and ambiguous sensory signals the brain receives to put us into contact with their worldly causes at multiple spatiotemporal scales. When the process malfunctions, however, the very properties that make this information-processing regime effective at creating contact with reality also give it the capacity to sever that contact in especially sinister ways.
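One way to picture that severing, as a hedged illustration of the general flavour rather than the specific proposal my paper reviews: if the precision assigned to a prior is wildly miscalibrated, perfectly optimal updating will leave the corresponding belief almost untouched by repeated counter-evidence.

```python
# Hedged illustration of the 'aberrant precision' flavour of the story:
# the update rule is the same optimal one sketched above; the pathology
# lies in the miscalibrated precision assigned to the entrenched belief.
mean, var = 1.0, 0.001                    # belief held with tiny variance
for _ in range(10):                       # ten rounds of counter-evidence
    k = (1 / var) / (1 / var + 1 / 0.25)  # precision weight on old belief
    mean = k * mean + (1 - k) * 0.0       # the evidence points to 0.0
    var = 1 / (1 / var + 1 / 0.25)
print(round(mean, 2))                     # still about 0.96 after ten updates
```

The arithmetic that lets well-calibrated priors stabilise perception against noisy input is exactly what lets a miscalibrated prior wall a belief off from the evidence.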
That, in an extremely schematic nutshell, is the basic story of the emergence of psychosis in conditions such as schizophrenia advanced by the work in computational psychiatry that I criticise in my recent paper. The details are of course much more complex and nuanced than I can do justice to here. (See my article and the references therein for a review of those details and some compelling evidence in favour of this story.)
In any case, the two challenges that I put forward for hierarchical Bayesian models of delusion are relatively straightforward.
First, I am sceptical that beliefs, delusional or otherwise, exist at the higher levels of a unified inferential hierarchy in the neocortex. Specifically, I think that every way of characterising this proposed hierarchy that I have seen in the literature is inadequate. For example, it can’t be true both that beliefs exist at the higher levels of the inferential hierarchy and that higher levels of the hierarchy represent phenomena at large spatiotemporal scales, because there are no such content restrictions on beliefs, delusional or not. (Delusional parasitosis concerns tiny parasites.) As far as I can tell, however, the available ways of avoiding this problem either are inadequate in other respects or appeal to an understanding of “the” hierarchy so nebulous that it ceases to be interestingly hierarchical at all.
Second, I am sceptical that belief fixation is Bayesian. Hierarchical Bayesian models of delusion (and indeed similar models of conditions such as autism) model the brain as an approximately optimal statistical inference machine. I think that this story is plausible for our broadly perceptual and sensorimotor capacities, where most of the evidence for such models exists. I think it is much less plausible for beliefs, however, which, on my reading of the empirical literature at least, emerge from a complex stew of bias, suboptimality, motivated reasoning, self-deception, and social signalling. And if ordinary belief fixation is not Bayesian, we shouldn’t try to explain delusions in terms of dysfunctions in a process of Bayesian inference.
*Thanks to Marcella Montagnese for helpful comments.