This post is a reply by Phil Corlett (Yale) to Dan Williams's recent post on Hierarchical Bayesian Models of Delusions.
Dan Williams has put forward a lucid and compelling critique of hierarchical Bayesian models of cognition and perception and, in particular, their application to delusions. I want to take the opportunity to respond to Dan’s two criticisms outlined so concisely on the blog (and in his excellent paper) and then comment on the paper more broadly.
Dan is “sceptical that beliefs—delusional or otherwise—exist at the higher levels of a unified inferential hierarchy in the neocortex.” He says, “every way of characterising this proposed hierarchy... is inadequate.”
He argues that “it can’t be true both that beliefs exist at the higher levels of the inferential hierarchy and that higher levels of the hierarchy represent phenomena at large spatiotemporal scales. There are no such content restrictions on beliefs, whether delusional or not. (Delusional parasitosis concerns tiny parasites).”
I agree that ‘the’ hierarchy is thus far poorly specified, to the extent that it may even seem nebulous. The notion of hierarchy has to some extent been invoked as a sort of get-out-of-jail-free card when, for example, some priors appear to be weak in patients with delusions and others strong (e.g. the very elegant work from Philipp Sterzer, Katharina Schmack, and others in Schmack et al. 2013, Stuke et al. 2018, and Stuke et al. 2017), and both effects correlate with delusions.
One way, within a hierarchical model, for this to make sense would be for the weak priors (often evinced as failures to perceive certain perceptual illusions) to generate prediction errors that must be reconciled. Such prediction errors create a state of perceptual hunger for priors (as Steve Dakin, and Jerzy Konorski before him, speculated), which is only satisfied by imposing stronger (and perhaps inaccurate) higher-level priors.
Hence the shift toward prior knowledge observed by Teufel, Fletcher and colleagues. This is what we mean by a hierarchy of prior beliefs. And it seems to relate importantly to psychotic symptoms and in particular delusions (although see their more recent work for data consistent with as well as a challenge for this idea of a hierarchy of priors and psychosis). What I don’t think we mean is that if delusions involve high-level prior beliefs, they necessarily have to entail only high-level concepts (or even large rather than small things as Dan suggests – this would indeed make parasitosis impossible).
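To make that mechanism concrete, here is a toy sketch in Python (a minimal illustration of precision-weighted Gaussian updating, not any published model; the precision values and the `gaussian_update` helper are invented for the example). A weak low-level prior lets a surprising observation dominate, leaving a large unreconciled prediction error; a strong higher-level prior then pulls the final belief back toward expectation:

```python
def gaussian_update(prior_mean, prior_prec, obs, obs_prec):
    """Precision-weighted Bayesian update for a Gaussian belief."""
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

obs = 5.0  # a surprising sensory sample; the system expected 0.0

# Level 1: a weak (low-precision) perceptual prior barely constrains
# the estimate, so the observation dominates and the prediction error
# relative to expectation stays large.
l1_mean, l1_prec = gaussian_update(0.0, 0.1, obs, 1.0)

# Level 2: a strong higher-level prior "explains away" the level-1
# estimate, pulling the final belief back toward prior expectation.
l2_mean, l2_prec = gaussian_update(0.0, 10.0, l1_mean, l1_prec)

print(round(l1_mean, 2), round(l2_mean, 2))  # → 4.55 0.45
```

The point of the toy is simply that weak priors at one level and strong priors at another are not contradictory: they can coexist, and interact, within a single hierarchy.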
I agree, we could be clearer, and we will be in future publications. We are trying to characterize neural and psychological hierarchies in ongoing experiments in healthy and delusional subjects. One approach that seems to be bearing fruit is hierarchical computational modeling of behavior (Mathys et al. 2014), with which we have implicated priors and hierarchical organization in the genesis of hallucinations (Powers et al. 2017) – watch this space for similar work on delusions.
Second, Dan is “sceptical that belief fixation is Bayesian”. I think Dan alludes to a solution to his skepticism in his own piece. None of these models demand optimally Bayesian inference. As Dan says, they involve “(approximate) Bayesian inference”. They entail inferences to the best explanation for a particular subject given their prior experiences and current sensory evidence.
They explicitly allow for deviations from optimality. Those deviations can differ across inferences and across people, and those differences are what allow the theory to be applied, theoretically and empirically (with some success), to such a wide range of conditions.
Addressing the second part of Dan’s second concern: can a Bayesian account explain “bias, suboptimality, motivated reasoning, self-deception, and social signaling”? These are important riffs on the first part of that concern: “Can biased beliefs be Bayesian?”
My answer is yes. One can model the irrational spread of rumors in crowds in a Bayesian manner (Butts 1998). Partisan political biases and polarization can be predicted by Bayesian models (Bullock 2009). I’ve previously argued, with Sarah Fineberg, that motivated reasoning and self-deception in delusional individuals can fall under the umbrella of a Bayesian account.
The key move here (borrowed from Tali Sharot’s work on belief updating biases) is to factor in a degree of doxastic conservatism (championed in delusion models by Ryan McKay, amongst others): if one allows some value to consistency, to sustaining the status quo, then belief updating will be biased. Predictive processing accounts have this value built into them, since they abhor unreconciled prediction error and have myriad ways to minimize it (including ignoring conflicting data, which is the move one makes when biased, motivated, and self-deceiving).
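The effect of such conservatism can be sketched in a few lines of Python (a toy illustration, assuming a simple two-hypothesis Bayesian update; the `conservatism` exponent is invented for the example, standing in for down-weighted evidence). With full weight on the evidence, strong disconfirming data halves confidence; with the evidence down-weighted, the entrenched belief barely moves:

```python
def conservative_update(prior, lik_h, lik_not_h, conservatism=1.0):
    """Bayes' rule with the likelihood raised to a conservatism
    exponent in (0, 1]: 1.0 is standard Bayes; smaller values
    under-use the evidence, keeping the posterior near the prior."""
    l1 = lik_h ** conservatism
    l0 = lik_not_h ** conservatism
    return prior * l1 / (prior * l1 + (1 - prior) * l0)

prior = 0.9                  # an entrenched belief
lik_h, lik_not_h = 0.1, 0.9  # evidence strongly against the belief

optimal = conservative_update(prior, lik_h, lik_not_h, conservatism=1.0)
biased = conservative_update(prior, lik_h, lik_not_h, conservatism=0.2)

print(round(optimal, 2), round(biased, 2))  # → 0.5 0.85
```

The updating here is still Bayesian in form throughout; the bias enters only through how heavily the evidence is weighed, which is the sense in which biased beliefs can fall under a Bayesian umbrella.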
I’d like to finish with a comment on explaining delusions in general. Unlike his blog post, Dan’s paper reads not only as a critique of hierarchical predictive models, but as somewhat of an apology for 2-factor theory. This is partly because those two model types have been adversaries for some time (as you can see in previous exchanges on this blog).
They needn’t be. It could be that 2-factor and prediction error models are expressing similar ideas at different levels of abstraction. I don’t subscribe to this view, but some people do. Regardless, if one critiques PE models for a lack of clarity, for being vague with regard to their inner workings, one ought to level similar challenges at 2-factor theory, or indeed any explanation of delusions.
The point here is not to attack 2-factor theory per se, but rather to recognize that explanations develop over time, through thought-experiments and real-world experiments. Some have been around longer than others. Some have more empirical support than others. Some have different explanatory ranges and scopes. It is important that we critically evaluate, compare and contrast all theories, if only to signpost the key areas for future inquiry and perhaps, ultimately, kill our darlings and approach a more complete explanation of delusions.