
Phil Corlett's response to Ryan McKay

In this post, Phil Corlett replies to Ryan McKay's summary of his paper "Measles, Magic and Misidentifications: A Defence of the Two-Factor Theory of Delusions", itself a response to Phil's earlier post on his paper "Factor one, familiarity and frontal cortex: a challenge to the two-factor theory of delusions". See also Amanda Barnier's important commentary on the debate, and Phil's reply. Got all that? Right, on with the post!


I am grateful to Ryan for his careful and collegial rebuttal of my critique. I am grateful too for the opportunity to respond.

Ryan’s response does blunt some of my points.

However, I am sure no one will be surprised that I have not updated my beliefs about 2-factor theory.

First, Ryan suggests that 2-factor theorists knew about the breadth of the deficits of vmPFC cases, since they were described in Langdon and Coltheart’s (2000) paper.

They were indeed described.

Why then, 19 years (and hundreds of theory papers) later, are 2-factor theorists still basing their arguments on these inappropriate cases?

Whether or not theorists are aware of the paucity of their data does not render those data any more or less appropriate.

If the vmPFC cases lack responses to ANY salient psychological stimulus, then they cannot logically serve as controls for the specific face-processing deficit in patients with Capgras delusion.

There may be other cases with more specific deficits; if so, they are the ones that should be mentioned and invoked whenever the argument is unpacked, and the four vmPFC cases ought not to be.

Furthermore, the neural loci of the damage in these cases with more specific SCR deficits should be reported; this would be informative with regard to the overall structure of the account.

Next, Ryan alludes to my own work on DLPFC prediction error as a means to deflate my argument.

This provides an opportunity to request some much needed clarity from 2-factor theorists about their theory.

My work focuses on prediction error. We find that it has a correlate in right DLPFC, measured with fMRI. This finding has been replicated many times, directly and conceptually. We have also found an association between inappropriate rDLPFC prediction error and delusion severity (Corlett and Fletcher 2015; Corlett et al. 2007), as have others (Kaplan et al. 2016).

Ryan and others are often quick to claim that their model is consistent with prediction error theory and that two-factor theorists also associate prediction error and delusions.

I would like to know how, exactly.

On this very blog (here), Max Coltheart has claimed that prediction error is the currency communicated from perception to belief evaluation: it is domain specific and, as far as I followed Max’s argument, intact. Take Capgras: the prediction error here, as I understand it, is that the person I understood to be my wife no longer feels familiar. This is surprising and might, if you also have a belief evaluation deficit, augur delusion formation.

In our data from first episode psychosis patients, the association between aberrant prediction error signaling in DLPFC and delusions is present across delusion contents (Corlett and Fletcher 2015; Corlett et al. 2007); it is domain, or at least content, general. As we have learned more about prediction error signaling and learning, it seems that our DLPFC signal is perhaps better conceived of as an unsigned state prediction error (Gläscher et al. 2010), a prediction error over possible models, and as such somewhat more related to belief evaluation and updating. It is also very much not intact. It is happening in response to events it should not, and, based on others’ data, models and theorizing too, I would suggest it has inappropriate precision (Adams et al. 2013).
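
To make that notion concrete, here is a minimal sketch of what I mean by an unsigned state prediction error. It is purely illustrative: the states, learning rate and update rule below are toy choices of mine, not a reproduction of any published model. The idea is simply that a learner tracks transition probabilities and registers surprise in proportion to how unlikely the observed transition was.

    # Illustrative sketch only: an unsigned state prediction error over
    # learned transition probabilities P(s' | s, a). The state/action sizes
    # and learning rate are made-up values for demonstration.
    import numpy as np

    n_states, n_actions = 3, 2
    T = np.full((n_states, n_actions, n_states), 1.0 / n_states)  # uniform prior beliefs
    eta = 0.2  # learning rate (illustrative)

    def state_prediction_error(s, a, s_next):
        """Unsigned surprise about where action a in state s actually led."""
        return 1.0 - T[s, a, s_next]

    def update_transition_beliefs(s, a, s_next):
        spe = state_prediction_error(s, a, s_next)
        # Shift probability mass toward the observed successor state.
        T[s, a, :] *= (1.0 - eta)
        T[s, a, s_next] += eta
        return spe

    # An unexpected transition yields a large unsigned prediction error.
    print(update_transition_beliefs(0, 1, 2))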

This detour is all to say that 2-factor theorists need to be clearer about their theory, especially when invoking my data. Am I measuring factor 1 or factor 2 when I assay prediction error in delusional patients with functional imaging? Is prediction error intact according to them? It is not according to prediction error theory.

Now, does 2-factor theory stand or fall based on the rDLPFC location? My response is the same as to point 1: no, it does not fall completely, but 2-factor theorists do need to get serious about where belief evaluation is computed, how, and when. They need to stop mentioning rDLPFC as a possible locus (given that half of the vmPFC controls also have DLPFC damage). And I think Ryan might need to concede that the limited data 2-factor theorists have acquired, which point at the rDLPFC (e.g. the hypnotizability TMS study; Coltheart et al. 2018), are undermined by these revelations about the vmPFC cases.

I think this pointing, towards an (r)(D)LPFC-encapsulated belief evaluation module, is wrong.

Some data could convince me otherwise.

For example, I would be delighted to conduct an adversarial collaboration with any 2-factor theorist with access to patients who have monothematic delusions consequent to neural damage. I would like to see how they behave on the causal learning task, as well as a number of other belief updating tasks we use in my lab. We can now, on the basis of behavioral data alone, infer the magnitude, timing and precision of prediction error signaling using computational modeling of behavioral time series. Let’s compare Capgras patient performance and prediction error to that observed in patients with endogenous psychotic illnesses and controls along the continuum of delusional beliefs. We could even pre-register our predictions. Maybe that would be an opportunity for 2-factor theorists to get a little more formal about factors 1 and 2?
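
To give a sense of what that modeling involves, here is a minimal, purely illustrative sketch: the delta-rule learner, the made-up outcomes and ratings, and the Gaussian fit below are toy assumptions of mine, not the actual task or analysis pipeline from my lab. The point is only that fitting a simple trial-by-trial learning model to behavior lets one read off an inferred learning rate and a trial-wise prediction error series.

    # Illustrative sketch only: infer trial-wise prediction errors from
    # behavioral data by fitting a simple delta-rule learner to ratings.
    import numpy as np
    from scipy.optimize import minimize

    def delta_rule_trace(outcomes, alpha, v0=0.5):
        """Run a delta-rule learner over binary outcomes.
        Returns per-trial predictions and prediction errors."""
        v, preds, pes = v0, [], []
        for o in outcomes:
            preds.append(v)
            pe = o - v          # signed prediction error on this trial
            pes.append(pe)
            v += alpha * pe     # update the expectation
        return np.array(preds), np.array(pes)

    def negative_log_likelihood(params, outcomes, ratings):
        """Gaussian likelihood of observed ratings given model predictions."""
        alpha, sigma = params
        preds, _ = delta_rule_trace(outcomes, alpha)
        return 0.5 * np.sum(((ratings - preds) / sigma) ** 2) + len(ratings) * np.log(sigma)

    # Hypothetical data: trial outcomes and one participant's causal ratings.
    outcomes = np.array([1, 1, 0, 1, 0, 0, 1, 1, 1, 0])
    ratings  = np.array([0.5, 0.6, 0.7, 0.55, 0.65, 0.5, 0.4, 0.5, 0.6, 0.7])

    fit = minimize(negative_log_likelihood, x0=[0.3, 0.2],
                   args=(outcomes, ratings), bounds=[(0.01, 1.0), (0.01, 1.0)])
    alpha_hat = fit.x[0]
    _, pe_series = delta_rule_trace(outcomes, alpha_hat)
    print(alpha_hat, pe_series)  # inferred learning rate and trial-wise prediction errors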

Finally, I am curious as to why Ryan did not address the other two concerns in my paper, either on the blog or in his published piece.

Why are there no data testing the predictions about other monothematic delusions (Fregoli, Cotard)? Where are their control cases? These seem rather straightforward predictions to test and model features to fill in, and I would imagine they could and should have been addressed in the last 20 years.

What of the penetrability of perception? Do 2-factor theorists concede that if beliefs can alter perception (with relevance to delusions, as demonstrated by Katharina Schmack amongst others (Schmack et al. 2017)), then the uni-directional flow of information from perception to belief does not obtain? That would seem like a bigger problem for 2-factor theory. To me it obviates the need for two independent, dissociable factors. Again, I appeal for some clarity here. 2-factor theorists have claimed that if beliefs cannot penetrate perception, that is a problem for predictive coding theory. They have not said much about their own commitments. I argued in my paper that 2-factor theory suffers if perception can be penetrated by belief, because delusions might then arise from damage to belief evaluation processes alone. I contend that if cognition does modulate perception (and it does, sometimes), that is a problem for 2-factor theory.

These differing predictions and commitments to mental/neural organization make the prospects of a unifying theory of delusions that reconciles prediction error theories with 2-factor theory seem rather slight – to me at least.

I appreciate that I am an acolyte of my own theory. I have, as people say (in questionable taste), drunk the prediction error Kool-Aid. I am probably largely responsible for mixing it. So, I am grateful for the opportunity to have this exchange with Ryan. I hope our readers might be better informed about our positions and disagreements, even if we don’t change each other’s beliefs.
