
Bayesian Inference, Predictive Coding and Delusions


This is the third in our series of posts on the papers published in an issue of Avant on Delusions. Here Rick Adams summarises his paper (co-written with Harriet R. Brown and Karl J. Friston) 'Bayesian Inference, Predictive Coding and Delusions'.


I am in training to become a psychiatrist, and I have recently completed a PhD at UCL under Prof Karl Friston, a renowned computational neuroscientist. I am part of a new field known as Computational Psychiatry (CP), which tries to explain various phenomena in psychiatry in terms of brain computations (see also Corlett and Fletcher 2014, Montague et al. 2012, and Adams et al. forthcoming in JNNP).

One phenomenon that ought to be amenable to a computational understanding is the formation of both ‘normal’ beliefs (i.e. beliefs which are generally agreed to be reasonable) and delusions.

There are strong theoretical reasons to suppose that we (and other organisms) form beliefs in a Bayesian way. Thomas Bayes was an 18th-century mathematician who tackled ‘inverse’ probability problems. ‘Direct’ probability is the probability of some data given their causes, e.g. the probability that a fair coin toss results in a ‘head’. Inverse probability is the opposite, e.g. the probability of a coin being fair, given a particular distribution of heads and tails. Bayes showed how to calculate the likely causes of data given the data and pre-existing beliefs (called ‘priors’) about the existence of those causes.
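In modern terms, Bayes’ theorem says the posterior probability of a cause is proportional to the likelihood of the data under that cause multiplied by the prior. Here is a minimal sketch of the coin example in Python; the biased coin’s P(head) = 0.8 and the 50:50 prior are illustrative assumptions, not figures from the paper:

```python
# Bayes' rule for the coin example: is the coin fair, or biased towards
# heads? The biased coin's P(head) = 0.8 and the 50:50 prior are invented
# for illustration -- they are not taken from the paper.

def posterior_fair(heads, tails, prior_fair=0.5, p_head_biased=0.8):
    """P(coin is fair | observed heads and tails), via Bayes' rule."""
    lik_fair = 0.5 ** (heads + tails)                    # P(data | fair)
    lik_biased = p_head_biased ** heads * (1 - p_head_biased) ** tails
    joint_fair = lik_fair * prior_fair                   # likelihood x prior
    joint_biased = lik_biased * (1 - prior_fair)
    return joint_fair / (joint_fair + joint_biased)      # normalise

print(posterior_fair(heads=8, tails=2))  # ~0.13: 8/10 heads favours 'biased'
print(posterior_fair(heads=5, tails=5))  # ~0.90: an even split favours 'fair'
```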

This inverse (now known as Bayesian) probability problem confronts all organisms with sensory systems: they collect sensory data and wish to infer the causes of those data. Sensory data are often extremely complex and noisy, and in this case appropriate prior beliefs are required to interpret them. ‘Beliefs’ used in this sense refer to probability distributions, not folk psychological statements.


Our prior beliefs are thought to take the form of a model of how the world causes sensory data. For example, knowing someone’s identity makes predictions about their configuration of facial features, which makes predictions about how colours and edges are distributed in a visual image of their face.

This model is hierarchical, as it has many levels of increasing complexity and abstraction as you move up from the raw sensory data to edges and colours, to individual features, to faces and to identities. Hierarchical models can use ‘predictive coding’ to predict low-level data by exploiting their high-level descriptions.
In predictive coding, a unit at a given hierarchical level sends messages to one or more units at lower levels which predict their activity; discrepancies between these predictions and the actual input are then passed back up the hierarchy in the form of prediction errors. These prediction errors revise the higher-level predictions, and this hierarchical message passing continues in an iterative fashion.
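A minimal sketch of this message passing for a single pair of levels, assuming a toy generative mapping g(mu) = 2·mu and a hand-picked learning rate (neither comes from the paper):

```python
# Toy predictive coding: a higher-level cause mu predicts lower-level input
# through an assumed generative mapping g(mu) = 2 * mu. The prediction
# error is passed back up and used to revise mu, iteratively.

def infer_cause(sensory_input, mu=0.0, learning_rate=0.1, n_iter=50):
    for _ in range(n_iter):
        prediction = 2.0 * mu               # top-down prediction g(mu)
        error = sensory_input - prediction  # bottom-up prediction error
        mu += learning_rate * 2.0 * error   # update along dg/dmu = 2
    return mu

print(infer_cause(sensory_input=6.0))  # converges on mu = 3.0, since g(3) = 6
```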

Exactly which predictions ought to be changed in order to explain away a given prediction error is a crucial question for hierarchical models. The Bayesian solution to this problem is to make the biggest updates to the level whose uncertainty is greatest relative to the incoming data at the level below (Mathys et al. 2011), i.e., if you are very uncertain about a person’s identity, but their face is encoded with great precision, you ought to update your beliefs about their identity a lot.
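For Gaussian beliefs this rule has a simple closed form: the belief shifts towards the data by a fraction equal to the data’s precision relative to the total precision. A sketch of the face-identity example follows; the precision values are invented for illustration:

```python
def update_belief(prior_mean, prior_precision, obs, obs_precision):
    """One precision-weighted Bayesian update for Gaussian beliefs."""
    # Learning rate = relative precision of the incoming data:
    # an uncertain prior plus precise data yields a large update.
    k = obs_precision / (prior_precision + obs_precision)
    return prior_mean + k * (obs - prior_mean)

# Very uncertain about identity, face encoded precisely: big update.
print(update_belief(prior_mean=0.0, prior_precision=0.1,
                    obs=1.0, obs_precision=10.0))   # ~0.99

# Confident about identity, face seen in poor light: tiny update.
print(update_belief(prior_mean=0.0, prior_precision=10.0,
                    obs=1.0, obs_precision=0.1))    # ~0.01
```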

But what of delusions? As we have seen, the certainty (precision) of beliefs at different levels is crucial to the inferences the hierarchical model makes. If the beliefs at the top of the model are very uncertain, they will be vulnerable to large updates on the basis of little sensory evidence. Many delusional ideas have this characteristic: e.g. a bus leaving its stop just as I arrive could make me infer that the driver deliberately drove off as he does not like me.
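The bus example can be caricatured with the same precision-weighted rule (all numbers invented for illustration): code the belief ‘the driver dislikes me’ on a scale from 0 (no) to 1 (yes), and treat the bus pulling away as weak evidence of 0.7. When the prior is abnormally imprecise, a single ambiguous event almost settles the belief:

```python
# Sequential precision-weighted updating: after each event the posterior
# becomes the next prior. Three ambiguous events (obs = 0.7, each with
# precision 1.0) are observed. All numbers are invented for illustration.

def update(mean, precision, obs, obs_precision=1.0):
    k = obs_precision / (precision + obs_precision)
    return mean + k * (obs - mean), precision + obs_precision

for label, precision in [("healthy prior  ", 5.0), ("imprecise prior", 0.05)]:
    mean, beliefs = 0.0, []
    for _ in range(3):
        mean, precision = update(mean, precision, obs=0.7)
        beliefs.append(round(mean, 2))
    print(label, beliefs)

# healthy prior   [0.12, 0.2, 0.26] -- the events barely move the belief
# imprecise prior [0.67, 0.68, 0.69] -- the first event nearly settles it
```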

There are strong neurobiological reasons (highlighted in our paper) to suppose that in schizophrenia – a disorder in which delusions are a prominent feature – there is an imbalance in the encoding of precision away from the top of the brain’s hierarchical model (where prior beliefs are more concentrated) and towards the bottom (sensory) levels. There is also a lot of evidence that subjects with a diagnosis of schizophrenia make inferences that depend less on prior beliefs: they are more resistant to visual illusions, for example.

This hypothesis by no means offers a comprehensive account of all delusions, but it does generate testable hypotheses in both cognitive and neurobiological domains that can be refined by empirical data. Notice that the problem of inverse probability is one faced by science, as well as perception.
