
Belief, Imagination, and Delusion

On 6th and 7th November, Ema Sullivan-Bissett organised a conference on Belief, Imagination, and Delusion at the University of Birmingham. The PERFECT team attended the event and this report is the result of their collective effort!

Anna Ichino on imagination
Paul Noordhof on aim of belief

Sophie Archer (Cardiff University) started the conference with a discussion of delusion and belief, inviting us to learn some lessons from the implicit bias literature. When the avowed anti-racist says that all races are equal but does not behave in ways consistent with this belief, we assume that there is an additional mental state (not open to consciousness) responsible for those behaviours. Is this additional mental state a belief? Archer argues that it is not.

In the background, there is a thesis about belief. Responding directly to epistemic reasons is necessary but not sufficient for a mental state to be a belief (Epistemic Reasons). If the mental state can be directly formed or revised on the basis of conditioning, then it is not a belief (Conditioning). In the second part of her talk, Archer considered two objections to her conditions for belief: one is that the conditions are not really distinct, and the other is that the picture sketched is not psychologically realistic.

Garry Young (Melbourne) addressed the revisionist model of the Capgras delusion in his talk: what causes the delusion, why the belief is accepted, and why it is maintained. Capgras is the delusion that a significant other has been replaced by an impostor. The traditional model of the delusion starts from an anomalous experience that is then either explained by, or endorsed as, a belief. In a later revised model, the anomaly is not a conscious experience (we can call it abnormal data) and the belief is the first conscious component of the delusion.

One problem with the revised model is that, by making the belief the first conscious step, it is not clear why the person does not report the acquired belief as an unbidden thought. In Capgras there is a mismatch between physical recognition (she looks like my wife) and lack of arousal (she doesn’t feel like my wife). So the content of the delusion would be: “This person who looks like my wife and claims to be my wife is not my wife”. This has strong explanatory power because it makes sense of what the person experiences.

Another problem is that it is not clear what the second factor in the formation of the delusion is, given that the reasoning is abductively good, providing an explanation for an unusual experience. This is highlighted by the fact that people who recover from the delusion seem able to check the delusion for plausibility, which makes it strange to say that the person previously lacked that capacity.

Young proposed that there is both co-occurrence and interaction between the experience of unfamiliarity and the belief about the impostor. He describes his model as interactionist. He rejects the idea that what first enters consciousness is a fully formed belief; there is a more gradual process. For instance, the person could be considering the content of Capgras as a contender for the truth (appealing to indicative imagination). The belief then has two functions: (a) it interprets the experience, and (b) it gives meaning to the perceptual data. There is a mutual effect between belief and perception.

After the lunch break, it was Lucy O’Brien’s (UCL) turn. Her talk was entitled “Delusions of Everyday Life: Death, Self-love, and Love of Another”, written with Doug Lavin. We have a sense of our own significance, a form of self-love, which makes us think that our own death would be a terrible thing. When we are in the grip of self-love, what can we reply to the objectivity response, that our life does not have the significance we attribute to it?

We can try to think about which property makes us value our own lives so much, but no property seems quite enough. It is more that we have lived this life with ourselves, a particular life with projects and achievements that we care about. Is this an in-built response, a kind of inevitable irrationality? Or is the response actually rational for agents like us? As in the self-love case, so too in the case of loving another: their death is a terrible thing for us. That is because we value them: (a) we value some character or stereotype that the loved person embodies; (b) we value the peculiarity and specificity of the loved person.

Another common ‘justification’ of love (and self-love) is history: I love them because we have a history together. But this attempt to justify assigning significance to those we love, or to ourselves, also fails on the grounds of contingency and inefficiency (it just happens to be the case that this is the person I have lived with, and another person might have pursued my projects better than I did). We could see self-love and love for others as a-rational drives, but they are not, because they do not work without the participation of our rational agency.

Ryan McKay (Royal Holloway) presented his paper next, entitled “Belief Formation in a Post-Truth World”. McKay started with a list of famous cases where people were made to believe in events that never occurred. What are the ramifications of fake news? For individuals, serious failures of judgement; for society at large, even more serious outcomes (e.g. indifference to climate change, anti-vax propaganda). How do false beliefs get traction in human minds?

There are many factors affecting belief consumption: truth is not the only relevant consideration. In terms of content-specific reasons to adopt a belief, one reason is that the belief matches your experience. But experience can be misleading (in clinical cases, an unusual experience can generate a very implausible, delusional belief).

Another factor affecting the adoption of beliefs is the presence of preferences for which there can be good evolutionary reasons. We tend to assume that there is agency behind what we experience even when there is none (as in supernatural beliefs or conspiracy theories). A third reason for adopting beliefs for which there is no (good) evidence is the presence of motivational factors: McKay discussed the case of positive illusions in this context.

One very interesting finding McKay discussed in the context of explaining how we maintain positive illusions is that more knowledge does not protect us from desirability effects and excessively polarised positions in public debates. That is because we update our beliefs asymmetrically, taking on board evidence for desirable outcomes but dismissing evidence for undesirable outcomes. Such biases in belief evaluation and belief revision underpin self-deception in deflationary accounts such as Mele’s.

There are also content-neutral factors (such as jumping to conclusions and other biases) that explain why we adopt certain beliefs. One important issue is signalling: people use beliefs as identity markers, sending a strong signal as to which views they are committed to and which groups they belong to.

Anna Ichino (Milan) gave the final paper of the day, investigating how imagining differs from belief. She maintained that imaginings and beliefs differ only with respect to their cognitive inputs, or, in other words, that whilst beliefs are formed in response to real-world evidence, imaginings are not. She then argued against a position that has become standard in the imagination literature, defending the view that imaginings and beliefs do not differ as regards their behavioural outputs.

She took it that the relationship between beliefs and behavioural outputs is such that our beliefs cause us to act in ways that would promote the satisfaction of our desires, if the beliefs were true. Ichino then suggested that in many cases imaginings, too, cause us to act in ways that would promote the satisfaction of our desires, if their contents were true.

But surely we can imagine all sorts of things without thereby acting on them? Acknowledging this thought, Ichino brought up O’Brien’s example: Lucy can imagine being an elephant, but this does not entail all of the behavioural outputs that believing she was an elephant would promote (for instance, resigning from her job, withdrawing from interactions because she would no longer be able to comply with human social norms, buying a new bed because elephants do not fit in human beds, and so on). But imaginings like these are not integrated into a network of cognitions in the way the equivalent beliefs are. Ichino's challenge to us and to you: find a case where an imagining does not motivate, where, all else being equal, a belief with the same content would. Can you think of one?

The second day of the conference started with a talk by Kathleen Stock (Sussex). The question guiding the talk was whether some delusions are also imaginings or involve imaginings.



Kathleen Stock on imagination and delusion

Jakob Ohlhorst on delusions as certainties


Stock reviewed Currie’s view of delusions. Currie makes a distinction between beliefs and propositional imaginings. To imagine that something is the case and to believe that it is not would not be contradictory; in this sense, propositional imaginings are isolated from other beliefs. Situations where a person does not believe that she is imagining that p, but rather believes that she believes that p, can be explained by appealing to an inability to appropriately monitor one’s acts of imagining.

The delusion here is an unrecognized propositional imagining which the agent falsely believes to be a belief. Stock argued that if delusions are imaginings in this sense, this may explain why they are maintained even after disconfirming evidence, and why people with delusions do not follow through on the consequences of their delusions or try to resolve the tension that those delusions may have with other beliefs.

After reviewing some objections, Stock went on to explore whether delusions involve autonomous imagining. She provided several reasons why this is not possible: unlike autonomous imagining, delusions are not within the agent’s control; delusions are not ‘authored’ (their content is not under the agent’s control); and delusions are immersive (it is not easy to switch from the content of the imagining to thoughts about the ‘real world’).

The question that followed was whether delusions involve prop-driven imagining. Stock answered this question positively and suggested that many delusions involve something like prop-driven imagining. She characterized prop-driven imagining as involving actions, being only indirectly initiated, not authored, and not easy to ‘come on and off’. When we watch a movie, for example, it is easy not to pay attention to what is happening in the real world: we do not attend to the screen itself, or to the sound of popcorn.

In this sense there is some sort of immersion involved in these cases. In line with Jaspers’ concept of ‘delusional atmosphere’ or ‘mood’, an overwhelming and indescribable transformation of the perceptual world that precedes the development of delusions, she argued that a “prop” is something continuous that cannot be switched off: it is as if we were forced to watch a film that we cannot turn off. It is in this sense, Stock argued, that the abnormal perceptual experience is the “prop”. This sort of explanation may provide an account of delusions that goes beyond propositional attitudes and better accounts for the ‘rich’ experiential nature of delusions (as described by Gallagher).

Stock then presented some possible challenges to the proposal that prop-driven imagining is imagining rather than belief, and her closing thought was that perhaps imaginative resistance is involved.

Jakob Ohlhorst (Geneva) gave a talk on "The Certainties of Delusion", arguing that delusions are a kind of certainty. He started his talk by defining delusions: as presented in the ICD-11 (2018), delusions are epistemically and practically dysfunctional beliefs that are firmly held despite any evidence against them. Ohlhorst then followed Wittgenstein in defining certainties as deeply held beliefs that are constitutive of a world-view and, as such, beyond evidential confirmation or defeat.

He then argued that since both delusions and certainties are endorsed as true regardless of the evidence, they are both doxastic. Based on the premises that delusions are doxastic states not controlled by the evidence, that certainties are doxastic states not controlled by the evidence, and that apart from hinges there is no further class of non-evidential doxastic states compatible with delusions, Ohlhorst argued that delusions are a kind of hinge.

Paul Noordhof (York) gave the next talk on ‘Consciousness and the Aim of Belief’. Many philosophers take it that whether to believe that p is settled by, and only by, determining whether p is true. But why? Noordhof considered whether belief (i) aims at truth; (ii) obeys some sort of truth norm; or (iii) has a (naturalistically given) function to be true; and then made trouble for all three notions.

On (i), he argued that we can have pragmatic aims when engaged in doxastic deliberation, and so the truth aim is not exclusive; on (ii), he argued that norms only tell us what we are permitted to do, not what we must do, and, again, that pragmatic norms can apply to belief formation, which defeats the exclusivity of a truth norm; on (iii), he argued that it must be possible for entities with a particular function to malfunction, which generates a version of the exclusivity objection for functionalists themselves.

Noordhof then offered his own positive account, that of “attentive consciousness”, a feature of consciousness which motivates us to form beliefs on the basis of what is presented to us in our experiences (rather than, e.g., what is comfortable or practical for non-experiential reasons). He had us all looking at a white table in the corner of the room. Obligingly, the table presented itself to us in our attention, rendering it very difficult to believe that it was in fact blue. The aims that come into play in self-deception are outweighed by this feature of consciousness.

An interesting upshot of the account is that subjects with monothematic delusions are more rational than people who are self-deceived: the former have anomalous experiences, and, as with all experience, attentive consciousness makes it attractive to believe that what is presented in experience is true. The self-deceived, however, lack the experiences which would provide the appropriate contents for their beliefs, and so those beliefs arrive via a different route. Self-deception would then seem to require a second factor of explanation, whilst delusion would not: on this account, delusion formation turns out to be more in line with standard belief formation than self-deception is.

Matt Parrott (Birmingham) gave the final talk of the conference, in which he offered an account of what makes some delusions un-understandable. What would count as an un-understandable delusion in this sense? Parrott gave the example of believing that the world is going to end on the basis of seeing a particular arrangement of marble tables.

He first found fault with the idea that delusions like this are un-understandable because the person who has them is irrational, for three reasons: firstly, delusional subjects do not manifest general failures of reasoning; secondly, they manifest excellent reasoning in certain experimental contexts; and thirdly, many non-delusional beliefs are irrational or poorly grounded in reason, and yet we can still understand why they arise.

Matt Parrott on delusions

So Parrott offered an alternative account: un-understandable delusions are such because they presuppose unimaginable inference principles. He first showed that suppositions are constrained by background inference principles, and that there are certain things that we cannot suppositionally imagine. Accordingly, to suppose that p one must have some way to discharge the supposition that p, and without some sense of what would count as discharging it, one cannot suppositionally imagine that p.


Can you suppose that the best explanation of the arrangement of marble tables is that the world is coming to an end? Would a different arrangement mean that it wasn’t? The problem is that the content of these sorts of delusions does not settle which inferences are legitimate, and this underdetermines what counts as discharging the supposition. What makes a delusion un-understandable, therefore, is that we cannot (suppositionally) imagine a significantly different system of inference principles that would render the delusion a plausible candidate explanation of the experience.

(We have not included a summary of Federico Bongiorno's talk on delusions as he will blog for us soon on that topic. Something to look forward to!)

This was a very rich and interesting conference with a welcoming and friendly atmosphere! Thanks to Ema for organising it so well.
