
A Reply to Dan Williams on Hierarchical Bayesian Models of Delusions

This post is a reply by Phil Corlett (Yale) to Dan Williams's recent post on Hierarchical Bayesian Models of Delusions.

Dan Williams has put forward a lucid and compelling critique of hierarchical Bayesian models of cognition and perception and, in particular, their application to delusions. I want to take the opportunity to respond to Dan’s two criticisms outlined so concisely on the blog (and in his excellent paper) and then comment on the paper more broadly.

Dan is “sceptical that beliefs—delusional or otherwise—exist at the higher levels of a unified inferential hierarchy in the neocortex.” He says, “every way of characterising this proposed hierarchy... is inadequate.”

He states that “it can’t be true both that beliefs exist at the higher levels of the inferential hierarchy and that higher levels of the hierarchy represent phenomena at large spatiotemporal scales.” There are no such content restrictions on beliefs, whether delusional or not (delusional parasitosis, after all, concerns tiny parasites).

I agree that ‘the’ hierarchy is thus far poorly specified, to the extent that it may even seem nebulous. The notion of hierarchy has, to some extent, been invoked as a get-out-of-jail-free card when, for example, some priors appear to be weak in patients with delusions and others strong (e.g. the very elegant work from Philipp Sterzer, Katharina Schmack, and others in Schmack et al. 2013, Stuke et al. 2018, and Stuke et al. 2017), and both effects correlate with delusions.

One way, within a hierarchical model, for this to make sense would be for the weak priors (often evinced as failures to perceive certain perceptual illusions) to generate prediction errors that must be reconciled. Such prediction errors create a state of perceptual hunger for priors (as Steve Dakin, and Jerzy Konorski before him, have speculated), which is only satisfied by imposing stronger (and perhaps inaccurate) higher-level priors.
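To make that mechanism concrete, here is a minimal sketch in Python of precision-weighted Gaussian updating across two levels. The two-level structure and all the numbers are illustrative assumptions of mine, not a fitted model: a weak low-level prior leaves the sensory data largely unexplained, and a strong high-level prior then absorbs the residual prediction error, pulling the final estimate toward prior belief.

# A minimal, illustrative sketch: precision-weighted updating at two levels.
# "Precision" is inverse variance; all numbers are made up for illustration.

def fuse(prior_mean, prior_precision, obs_mean, obs_precision):
    """Precision-weighted combination of a Gaussian prior and evidence."""
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean
                 + obs_precision * obs_mean) / post_precision
    return post_mean, post_precision

sensory_input = 1.0

# Weak low-level prior (precision 0.1): the percept tracks the data, so the
# mismatch with expectation survives as a large unreconciled prediction error.
low_mean, low_prec = fuse(0.0, 0.1, sensory_input, 1.0)
unexplained_error = sensory_input - 0.0

# A strong high-level prior (precision 2.0) then absorbs that residual,
# dragging the final estimate toward prior belief, accurately or not.
high_mean, _ = fuse(-1.0, 2.0, low_mean, low_prec)

print(f"percept: {low_mean:.2f}, unexplained error: {unexplained_error:.2f}")
print(f"estimate under strong high-level prior: {high_mean:.2f}")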

Hence the shift toward prior knowledge observed by Teufel, Fletcher and colleagues. This is what we mean by a hierarchy of prior beliefs, and it seems to relate importantly to psychotic symptoms and in particular delusions (although see their more recent work for data both consistent with and challenging this idea of a hierarchy of priors and psychosis). What I don’t think we mean is that if delusions involve high-level prior beliefs, they must entail only high-level concepts (or only large rather than small things, as Dan suggests; that would indeed make parasitosis impossible).

I agree that we could be clearer, and we will be in future publications. We are trying to characterize neural and psychological hierarchies in ongoing experiments in healthy and delusional subjects. One approach that seems to be bearing fruit is hierarchical computational modeling of behavior (Mathys et al. 2014), with which we have implicated priors and hierarchical organization in the genesis of hallucinations (Powers et al. 2017); watch this space for similar work on delusions.

Second, Dan is “sceptical that belief fixation is Bayesian”. I think Dan alludes to a solution to his skepticism in his own piece. None of these models demand optimally Bayesian inference. As Dan says, they involve “(approximate) Bayesian inference”. They entail inferences to the best explanation for a particular subject given their prior experiences and current sensory evidence. 

They explicitly allow for deviations from optimality. Those deviations can differ across inferences and across individuals, and those differences create the opportunity to apply the theory, theoretically and empirically (with some success), to the myriad conditions it has been used to address.
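To illustrate what ‘approximate’ can amount to, here is a minimal sketch of my own construction (not a model from the paper) in which a single parameter kappa scales how strongly evidence is weighted: kappa = 1 recovers exact Bayes, while values above or below 1 produce the kind of individual deviations from optimality just described.

# A minimal, illustrative sketch of "approximate" Bayesian updating over two
# competing hypotheses. kappa = 1 is exact Bayes; kappa != 1 deviates from it.

def approximate_bayes(prior, likelihood, kappa=1.0):
    """Posterior over hypotheses, with evidence weighted by kappa."""
    unnorm = [p * (l ** kappa) for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = [0.5, 0.5]        # two competing explanations, initially equiprobable
likelihood = [0.9, 0.1]   # the evidence favours the first explanation

print(approximate_bayes(prior, likelihood, kappa=1.0))  # optimal observer
print(approximate_bayes(prior, likelihood, kappa=0.3))  # evidence under-weighted
print(approximate_bayes(prior, likelihood, kappa=3.0))  # evidence over-weighted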

Addressing the second part of Dan’s second concern: can a Bayesian account explain “bias, suboptimality, motivated reasoning, self-deception, and social signaling”? These are important riffs on the first part of Dan’s second concern: “Can biased beliefs be Bayesian?”

My answer is yes. One can model the irrational spread of rumors in crowds in a Bayesian manner (Butts 1998). Partisan political biases and polarization can be predicted by Bayesian models (Bullock 2009). I’ve previously argued, with Sarah Fineberg, that motivated reasoning and self-deception in delusional individuals can fall under the umbrella of a Bayesian account.

The key move here (borrowed from Tali Sharot’s work on belief-updating biases) is to factor in a degree of doxastic conservatism (championed in delusion models by Ryan McKay, amongst others): if one allows some value in consistency, in sustaining the status quo, then belief updating will be biased. Predictive processing accounts have this value built into them, since they abhor unreconciled prediction error and have myriad ways to minimize it (including ignoring conflicting data, which is precisely the move one makes when biased, motivated, and self-deceiving).
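One way to picture this, as a rough sketch of my own rather than anyone’s published model, is an update rule that mixes the Bayesian posterior back toward the current belief with a conservatism weight c: at c = 0 updating is ideal, and as c grows the status quo increasingly survives contradictory evidence.

# A minimal, illustrative sketch of doxastic conservatism: the posterior is
# mixed back toward the prior belief with weight c, damping belief revision.

def conservative_update(belief, likelihood, c=0.0):
    """Bayes over two hypotheses, then shrink toward the prior by c in [0, 1)."""
    unnorm = [b * l for b, l in zip(belief, likelihood)]
    z = sum(unnorm)
    posterior = [u / z for u in unnorm]
    return [(1 - c) * post + c * pri for post, pri in zip(posterior, belief)]

belief = [0.8, 0.2]       # an entrenched belief in the first hypothesis
evidence = [0.2, 0.8]     # data that squarely contradict it

print(conservative_update(belief, evidence, c=0.0))  # ideal updating
print(conservative_update(belief, evidence, c=0.7))  # status quo mostly preserved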

I’d like to finish with a comment on explaining delusions in general. Unlike his blog post, Dan’s paper reads not only as a critique of hierarchical predictive models but as something of an apologia for 2-factor theory. This is partly because those two model types have been adversaries for some time (as you can see in previous exchanges on this blog).

They needn’t be. It could be that 2-factor and prediction error models express similar ideas at different levels of abstraction. I don’t subscribe to this view, but some people do. Regardless, if one critiques prediction error models for a lack of clarity, for being vague with regard to their inner workings, one ought to level similar challenges at 2-factor theory, or indeed any explanation of delusions.

The point here is not to attack 2-factor theory per se, but rather to recognize that explanations develop over time, through thought-experiments and real-world experiments. Some have been around longer than others. Some have more empirical support than others. Some have different explanatory ranges and scopes. It is important that we critically evaluate, compare and contrast all theories, if only to signpost the key areas for future inquiry and perhaps, ultimately, kill our darlings and approach a more complete explanation of delusions.
