
Hierarchical Bayesian Models of Delusion


Today's post is by Dan Williams, a PhD candidate in the Faculty of Philosophy at the University of Cambridge.



If you had to bet on it, what’s the probability that your loved ones have been replaced by visually indistinguishable imposters? That your body is overrun with tiny parasites? That you’re dead? As strange as these possibilities are, each of them captures the content of well-known delusional beliefs: Capgras delusion, delusional parasitosis, and Cotard delusion respectively.

Delusional beliefs come in a wide variety of forms and arise from a comparably diverse range of underlying causes. One of the deepest challenges in the contemporary mind sciences is to explain them. Why do people form such delusions? And why on earth do they retain them in the face of seemingly overwhelming evidence against them?

My new paper “Hierarchical Bayesian Models of Delusion” presents a review and critique of a fascinating body of research in computational psychiatry that attempts to tackle these questions. To cut a long story short, that research suggests that delusions arise from dysfunctions in a process of hierarchical Bayesian inference.

To understand what this means, you have to grasp three ideas.

The first is the idea that information processing in the brain is hierarchical. It’s not always clear exactly how this claim should be understood (see below). The core idea, however, draws inspiration from the structure of sensory and motor areas of the neocortex, as well as the extraordinary success of deep (i.e. hierarchical) neural networks in machine learning.

For example, the ventral visual pathway comprises a cascade of functional areas (e.g. LGN–V1–V2–V4–IT) in which low levels represent simple stimulus features such as edge segments and basic colour and contrast information, while higher levels process information at increasingly greater levels of spatiotemporal scale and abstraction, eventually reaching representations of things like faces and houses. Many neuroscientists extrapolate from such findings that all information processing in the neocortex is hierarchical, with perceptual experiences arising at lower hierarchical levels and beliefs at higher levels.

The second is the idea that information flow in this hierarchy is bi-directional: information flows up the hierarchy to higher levels, but it also flows back down from higher levels to primary sensory areas.

The third idea is that this process of bi-directional information processing implements a form of (approximate) Bayesian inference, combining prior expectations about the state and structure of the world carried via top-down connections with incoming sensory evidence in a statistically optimal way.
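This third idea has a simple quantitative core. The sketch below is my own illustration, not code from the paper or the models it reviews: it combines a Gaussian prior expectation with a Gaussian sensory likelihood, weighting each by its precision (inverse variance), which is the textbook form of optimal Bayesian cue combination.

```python
def posterior(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation.

    Returns the posterior mean and variance. The posterior mean is a
    precision-weighted average: whichever source of information is more
    precise (lower variance) pulls the estimate more strongly toward itself.
    """
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var

# A confident prior (low variance) dominates noisy evidence (high variance):
mean, var = posterior(prior_mean=0.0, prior_var=0.5, obs=10.0, obs_var=4.5)
# The estimate lands near the prior (mean = 1.0), far from the raw observation.
```

On one prominent version of the story reviewed in the paper, psychosis involves misassigned precision: if sensory evidence is systematically over-weighted, or prior expectations under-weighted, the same arithmetic drags inference toward anomalous percepts, which delusional beliefs then purport to explain.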

When all is going well, this process of hierarchical Bayesian inference is alleged to leverage the noisy and ambiguous sensory signals the brain receives to put us into contact with their worldly causes at multiple spatiotemporal scales. When the process malfunctions, however, the very properties that make this information processing regime effective at creating that contact with reality imbue it with the capacity to remove that contact in especially sinister ways.

That—in an extremely schematic nutshell—is the basic story of the emergence of psychosis in conditions such as schizophrenia advanced by the work in computational psychiatry that I criticise in my recent paper. The details are of course much more complex and nuanced than I can do justice to here. (See my article and the references therein for a review of those details and some compelling evidence in favour of this story.)

In any case, the two challenges that I put forward for hierarchical Bayesian models of delusion are relatively straightforward.

First, I am sceptical that beliefs—delusional or otherwise—exist at the higher levels of a unified inferential hierarchy in the neocortex. Specifically, I think that every way of characterising this proposed hierarchy that I have seen in the literature is inadequate. For example, it can’t be true both that beliefs exist at the higher levels of the inferential hierarchy and that higher levels of the hierarchy represent phenomena at large spatiotemporal scales. There are no such content restrictions on beliefs, whether delusional or not. (Delusional parasitosis concerns tiny parasites.) As far as I can tell, however, available ways of avoiding this problem are either inadequate or rely on an appeal to an understanding of “the” hierarchy so nebulous that it ceases to be interestingly hierarchical at all.

Second, I am sceptical that belief fixation is Bayesian. Hierarchical Bayesian models of delusion—and indeed similar models of conditions such as autism—model the brain as an approximately optimal statistical inference machine. I think that this story is plausible for our broadly perceptual and sensorimotor capacities, where most of the evidence for such models exists. I think that it is much less plausible for beliefs, however, which—in my reading of the empirical literature, at least—emerge from a complex stew of bias, suboptimality, motivated reasoning, self-deception, and social signalling. If ordinary belief fixation is not Bayesian, however, we shouldn’t try to explain delusions in terms of dysfunctions in a process of Bayesian inference.

*Thanks to Marcella Montagnese for helpful comments.
