
Explaining Delusions (3)

Phil Corlett
This is a response to Max Coltheart's contribution to the blog, posted on behalf of Phil Corlett.

Thank you, Max. Your responses are enlightening. I do have a number of follow-up questions, if I may.

Follow-up to Q 1 – If prediction error is intact in people with delusions, how do we explain the patterns of prediction error disruption that we observe in our data? These patterns have been consistent across endogenous delusions (Corlett et al., 2007) and drug-induced delusions (Corlett et al., 2006), as well as in healthy people with delusion-like ideas (beliefs in telekinesis, for example) (Corlett & Fletcher, 2012). Importantly, these neural responses (aberrant prediction errors) correlated with delusion severity across subjects in these studies.
On the other hand, if prediction error must be intact for 2-factor theory, do our data suggest that 2-factor theory does not apply to delusions that occur in schizophreniform illnesses?

I know that you have written on this topic, but it seems that there is evidence for a broader deficit in belief evaluation in psychotic illness; hence the bizarre polythematic delusions that some patients experience. This broader deficit is more consistent with an aberrant prediction error explanation. Perhaps the genesis of candidate explanations is also disrupted in schizophrenia? Steffen Moritz and Todd Woodward's work (Moritz, Woodward, & Lambert, 2007), as well as some of our own (Corlett et al., 2009), would suggest that psychosis might be associated with a more liberal acceptance bias – patients may entertain more unlikely explanations for events.

Follow-up to Q 2 – So, according to your answer, prediction error is neither Factor 1 nor Factor 2 but rather the interaction between them? And this interaction is normal?

This requirement for normal prediction error seems to depart slightly from other Bayesian interactionist models (Young, 2008), which have more readily embraced prediction error disruption as a mechanism through which beliefs and experiences interact – and through which that interaction may be disrupted in people with delusions.
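To make the shape of such an interactionist account concrete, here is a minimal sketch in Bayesian terms (my own gloss, not a formula taken from Young, 2008, or from 2-factor theory): the plausibility of a belief given an experience combines the prior belief with how well that belief predicts the experience, and prediction error is the mismatch that drives updating.

\[
P(\mathrm{belief} \mid \mathrm{experience}) \propto P(\mathrm{experience} \mid \mathrm{belief}) \times P(\mathrm{belief}), \qquad \delta = \mathrm{experience}_{\mathrm{observed}} - \mathrm{experience}_{\mathrm{expected}}
\]

On such a reading, a disturbance of \(\delta\) would distort both what feels surprising and which candidate beliefs come to seem plausible.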

Follow-up to Q 3 – According to this explanation, belief formation is intact in your model – the imposter hypothesis is appropriate given the perceptual experience, and that candidate is adopted and then never updated? Is that correct?

Do the patients with perceptual disruption but no delusions (critical to 2-factor theory) have normal genesis of candidate explanations also? So the imposter explanation crosses their minds, but they discount it?

Unless pure Factor 1 patients entertain and then dismiss the imposter explanation, it seems that this third factor (the genesis of candidate explanations) also needs to be disrupted for delusions to arise.

If not, wouldn't that mean the perceptual disruption has to be different between those who form delusions and those who don't? That seems problematic for the theory.

Have you applied your careful neuropsychological analysis to this new third factor?

Are there people who have impaired candidate genesis but normal evaluation?

Could candidate evaluation disruption be involved in the pathogenesis of bizarre delusions in schizophrenia as I suggest above?

The prediction-error-to-candidate-genesis part of the account (as in your abductive inference paper) sounds a great deal like our aberrant prediction error account.

Follow-up to Q 4 – The idea that belief evaluation is not completely abolished but rather impaired is interesting. Could this impairment be premorbid? Perhaps the people who were most schizotypal before their head injury are the ones more likely to form delusions on the basis of their Factor 1 damage? There is some evidence in support of this notion (Feinberg et al., 2005). Doesn't this weakening, rather than outright failure, suggest a single-factor explanation? And doesn't that single factor sound a lot like a prediction error disruption (surprising, unexpected experiences)?

I don't think this idea of weakness rather than breakage answers my question, though. Take Peter Halligan's recent work on odd beliefs in the general population, for example: people who entertained one odd belief were much more likely to endorse other, unrelated odd beliefs. So why just one delusion, if belief evaluation is weak or damaged? Why aren't other beliefs altered too? Or do we just lack the data?

I wonder: has anyone ever gathered data from these patients on their cognitive performance or the other beliefs they endorse, and on how that endorsement may have changed after their injury? Aside from the study on premorbid ego function (Feinberg et al., 2005), I am not sure those data exist.

Follow-up to Q 5 – I am glad that 2-factor theory allows for a top-down influence of belief on perception. This would allow a deficit in Factor 2 (belief evaluation) to create perceptual changes. Again, this would be a one-factor explanation. That one factor would be a disruption in the balance between top-down belief and bottom-up sensation – or prediction error, in a predictive coding account. There are facts we don't have about monothematic delusions (such as how individuals with these delusions form and hold new beliefs, whether other beliefs change as a result of their damage, and whether their beliefs were different before their delusions). These facts would be useful to have in building and evaluating our explanatory theories.
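To illustrate what I mean by "balance" here, a simplified predictive coding sketch (my own illustration, not a formula from any of the papers cited) treats belief updating as prediction error weighted by the relative precision (reliability) of sensory evidence and prior belief:

\[
\mu_{\mathrm{new}} = \mu_{\mathrm{prior}} + \frac{\pi_{\mathrm{sensory}}}{\pi_{\mathrm{sensory}} + \pi_{\mathrm{prior}}}\,\big(x - \mu_{\mathrm{prior}}\big)
\]

where \(x\) is the sensory input, \(\mu\) the current belief, and \(\pi\) the precision terms. On this sketch, a single change to the precision weighting – overweighting sensory prediction error or underweighting priors – could yield both anomalous experience and anomalous belief, which is why I read it as a one-factor story.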

References

Corlett, P. R., & Fletcher, P. C. (2012). The neurobiology of schizotypy: Fronto-striatal prediction error signal correlates with delusion-like beliefs in healthy people. Neuropsychologia, 50(14), 3612-3620.

Corlett, P. R., Honey, G. D., Aitken, M. R., Dickinson, A., Shanks, D. R., Absalom, A. R., et al. (2006). Frontal responses during learning predict vulnerability to the psychotogenic effects of ketamine: Linking cognition, brain activity, and psychosis. Archives of General Psychiatry, 63(6), 611-621.

Corlett, P. R., Murray, G. K., Honey, G. D., Aitken, M. R., Shanks, D. R., Robbins, T. W., et al. (2007). Disrupted prediction-error signal in psychosis: Evidence for an associative account of delusions. Brain, 130(Pt 9), 2387-2400.

Corlett, P. R., Simons, J. S., Pigott, J. S., Gardner, J. M., Murray, G. K., Krystal, J. H., et al. (2009). Illusions and delusions: Relating experimentally-induced false memories to anomalous experiences and ideas. Frontiers in Behavioral Neuroscience, 3, 53.

Feinberg, T. E., DeLuca, J., Giacino, J. T., Roane, D. M., & Solms, M. (2005). Right hemisphere pathology and the self: Delusional misidentification and reduplication. In T. E. Feinberg & J. P. Keenan (Eds.), The Lost Self: Pathologies of the Brain and Identity (pp. 100-130). New York: Oxford University Press.

Moritz, S., Woodward, T. S., & Lambert, M. (2007). Under what circumstances do patients with schizophrenia jump to conclusions? A liberal acceptance account. British Journal of Clinical Psychology, 46(Pt 2), 127-137.

Young, G. (2008). Capgras delusion: An interactionist model. Consciousness and Cognition, 17(3), 863-876.
