
Explaining Delusional Beliefs: a Hybrid Model

In this post Kengo Miyazono (Hiroshima) and Ryan McKay (Royal Holloway) summarise their new paper “Explaining delusional beliefs: a hybrid model”, in which they present and defend a hybrid theory of the development of delusions that incorporates the central ideas of two influential (yet sometimes bitterly opposing) theoretical approaches to delusions—the two-factor theory and the prediction error theory. 



There are at least two influential candidates for a global theory of delusions (i.e., a theory that explains many kinds of delusions, rather than particular kinds of delusions such as persecutory delusions) in the recent literature: the two-factor theory (Coltheart, 2007; Coltheart, Menzies, & Sutton, 2010; Coltheart, Langdon, & McKay, 2011), according to which delusions are explained by two distinct neurocognitive factors with different explanatory roles, and the prediction error theory (Corlett et al., 2010; Corlett, Honey, & Fletcher, 2016; Fletcher & Frith, 2009), according to which delusions are explained by the disrupted processing of prediction errors (i.e., mismatches between expectations and actual inputs).

Which one is correct: the two-factor theory or the prediction error theory? Recent years have seen vigorous debates between the two camps. A recent example was the paper “Factor one, familiarity and frontal cortex: a challenge to the two-factor theory of delusions”, in which Phil Corlett, one of the main figures in the prediction error camp, challenged some basic assumptions of the two-factor account of the Capgras delusion. Some of the discussions between the two camps have been hosted on this blog.

Our view, however, is that we do not have to choose one theory at the complete expense of the other. In fact, there are good reasons to seek a rapprochement between the two theories. For instance, the two-factor theory (as a general framework) tends to be rather agnostic about mechanistic details. By adopting some ideas from the prediction error theory camp, we might achieve a better understanding of the nature (and neurophysiological cause) of the second factor. Conversely, by adopting some ideas from the two-factor theory camp, we might better understand how alleged abnormalities in processing prediction errors manifest themselves at the psychological level of description.

We have previously argued that the two theories might not be irreconcilable alternatives (McKay, 2012; Miyazono, 2018; Miyazono, Bortolotti, & Broome, 2014). In support of our position, our new paper advances a particular hybrid theory of delusion formation, arguing that key contributions of the two theories can be combined in a powerful way.

According to the hybrid theory, the first/second factor distinction in the two-factor framework corresponds to a crucial distinction in the prediction error framework, namely, the distinction between prediction errors and their estimated precision. More precisely, we contend that the first factor (at the psychological level) is physically grounded in an abnormal prediction error (at the neurophysiological level), and the second factor (at the psychological level) is physically grounded in the overestimation of the precision of this abnormal prediction error (at the neurophysiological level). (Note: The “physical grounding” is a placeholder for whatever it is that relates psychological and neurophysiological levels of explanation.)

Here is how this theory applies to the Capgras delusion.

First Factor & Prediction Error: We follow the standard account in the two-factor theory camp that the first factor in the Capgras delusion is the abnormal datum about a familiar face. This abnormal datum is physically grounded in an abnormal prediction error; i.e., a mismatch between the expected and actual autonomic response to a familiar face (cf. Coltheart, 2010).

Second Factor & Estimated Precision: We adopt the hypothesis that the second factor is a “bias towards observational (or explanatory) adequacy” (“OA bias”); i.e., the tendency to form beliefs that accommodate perceptions, even where this entails adjustments to the existing web of belief (Stone & Young, 1997; McKay, 2012). The OA bias, we contend, is physically grounded in the overestimation of the precision of abnormal prediction errors (in which the first factor is physically grounded). When the precision of an abnormal prediction error is overestimated, the abnormal prediction error is prioritised over prior beliefs, and it drives bottom-up belief updating processes (cf. Adams et al., 2013; Fletcher & Frith, 2009). In effect, this is the OA bias.
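The role of estimated precision here can be illustrated with a toy calculation. Below is a minimal sketch (our illustration, not a model from the paper) of Kalman-style precision-weighted Gaussian belief updating: the posterior is pulled toward the incoming datum in proportion to the estimated precision of the prediction error, so overestimating that precision makes the abnormal datum override the prior. The function name, the numbers, and the 0/1 coding of the belief are all hypothetical choices for the example.

```python
# A toy sketch of precision-weighted belief updating (illustrative only).
# The posterior mean moves toward the datum in proportion to the estimated
# precision of the prediction error, relative to the precision of the prior.

def update_belief(prior_mean, prior_precision, datum, pe_precision):
    """Kalman-style update: return the posterior mean of a Gaussian belief."""
    prediction_error = datum - prior_mean
    learning_rate = pe_precision / (pe_precision + prior_precision)
    return prior_mean + learning_rate * prediction_error

# Prior belief, coded as 1.0 ("this is my wife"), held with high precision.
prior_mean, prior_precision = 1.0, 10.0
# Abnormal datum: a flat autonomic response, coded as 0.0.
datum = 0.0

# Precision of the abnormal prediction error estimated as low:
# the prior dominates, and the belief barely moves.
normal = update_belief(prior_mean, prior_precision, datum, pe_precision=1.0)

# Precision grossly overestimated (the hypothesised second factor):
# the abnormal datum dominates and drives bottom-up belief revision.
overestimated = update_belief(prior_mean, prior_precision, datum, pe_precision=100.0)

print(round(normal, 3))         # posterior stays near the prior (≈ 0.909)
print(round(overestimated, 3))  # posterior collapses toward the datum (≈ 0.091)
```

In effect, accurate precision estimation leaves the existing web of belief largely intact, while overestimation implements something like the OA bias: perception is accommodated at the expense of prior belief.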

This hybrid account can be easily generalised to many other delusions. In fact, this theory, because of its hybrid nature, has a wide scope of application. The two-factor theory provides a plausible account of a range of monothematic delusions that can arise due to neuropsychological deficits. In contrast, the prediction error theory provides a plausible account of delusions in schizophrenia. Our hybrid theory provides a unified explanation of both types of delusions.

Of course, the hybrid theory as it stands does not answer all questions about the process of delusion formation. For example, it is not clear how the hybrid theory accommodates a role for motivational factors in delusion formation (McKay, Langdon, & Coltheart, 2005). Relatedly, although the hybrid theory has a wide scope of application, it might not explain all delusions. A particularly difficult example is anosognosia, which may require a separate account (for more on the hybrid theory and delusion in anosognosia, see Miyazono, 2018).
