Tuesday, 7 October 2014

PERFECT Launch (1): False but Epistemically Beneficial Beliefs

In this post I would like to introduce our new project, PERFECT, which started a week ago and will last for five years. The next few weeks on the blog are dedicated to an initial exploration of
the project themes, with posts by team members and interviews with people who have inspired us.

(I interviewed Martin Davies, who was my PhD supervisor and introduced me to the psychological literature on delusions. The first part of the interview appeared here, and the second part will be published on Thursday).

The project is funded by a European Research Council Consolidator grant awarded to me last December. The funding allows me to explore a novel idea and provides the resources for building a team. Currently, the PERFECT team includes Ema Sullivan-Bissett (post-doc) and Magdalena Antrobus (PhD student), who are based in the Philosophy Department at the University of Birmingham. Two other post-doctoral researchers and another PhD student will join the team at a later stage. The Co-Investigator is Michael Larkin from the School of Psychology at the University of Birmingham.

The novel idea we wish to explore is that even cognitions that are factually inaccurate can have benefits for the acquisition of knowledge. This is counterintuitive, as most would agree that inaccurate cognitions can at best benefit an agent pragmatically, by enhancing their wellbeing (short- or long-term) or by conferring other practical advantages on them, while undermining knowledge of the self or of the surrounding physical and social world. In the first part of the project, we want to focus on BELIEFS that are false and irrational, and that may be common in the non-clinical population or appear as symptoms of psychiatric disorders. Next, we will look at memories, narratives and explanations.

There are several objectives we want to achieve by focusing on the potential epistemic benefits of imperfect cognitions and here I can only mention a couple. We want to challenge the assumption that there is a trade-off between psychological and epistemic benefits in phenomena such as self-deception and positive illusions, and show that psychological and epistemic benefits go hand in hand. Often we do not see this because our way of understanding epistemic evaluation (whether having a certain belief advances the fulfilment of epistemic goals or whether a certain agent is praiseworthy or blameworthy for having a certain belief) is idealised and decontextualised. As part of the project, we hope to contribute to a “psychologisation” of epistemic evaluation. Statements about what we should believe need to be shaped by what we can believe (given perceptual and reasoning capacities, memory limitations, the emotional and affective influences on belief formation, and so on) and by the multiple functions that our beliefs have.

Our project logo
A first step is a refinement of the language of epistemic evaluation, so that cognitions are not simply divided between (epistemically) good and (epistemically) bad, and quickly dismissed when they fall into the “bad” category. In some circumstances, beliefs can be at the same time false and epistemically beneficial (or non-wrongful). On our account, this is the case if (1) the belief fosters knowledge and does not merely hinder it, and (2) the capacity the person has to believe otherwise is compromised when the belief is adopted.

As Penta and Lasalvia (2013) put it in a recent paper discussing delusions of pregnancy in a woman who had been subject to trauma and abuse: “a delusion's specific theme can be regarded as an adaptive — though dysfunctional — way of coping with stress, and insofar as possible, delusions can help a person explain her world, no matter how unrealistically.” When a delusion helps relieve stress, it doesn’t just contribute to wellbeing, but it also contributes to the capacity a person has to relate to and understand the self and the world.

Here I talk about my plans for the first year of project PERFECT (video).


  1. Two initial comments, Lisa:

    (a) In subsequent posts will you be giving examples of actual real-life cases where having a false belief did foster knowledge i.e. did foster the acquisition of true beliefs?

    (b) I am not sure I understand (2) in your penultimate paragraph. What does the "otherwise" in "believe otherwise" refer to? This sentence seems to mean that when a person has a false belief, for this false belief to count as epistemically beneficial, it has to have the effect of compromising the believer's capacity to . . . what? Believe something that is actually true? But if that's what's meant, what is epistemically beneficial about that?

  2. Hi Max!

    Thanks for this.

    Let me start with (b) as it is a question of clarification. Some philosophers think that we can be at least partially responsible for having a false belief (for instance, when we do not pay sufficient attention to evidence against the content of our belief or ignore apparently reliable testimony against the content of our belief).

    But this sense of responsibility makes sense if the person who is forming the belief has the capacity to form another belief at the time. If the capacity to form another belief (including a true belief) is compromised, then it does not seem appropriate to hold the person responsible for adopting a false belief.

    How can the capacity to form a belief be compromised? Well, in several ways. There may be deficits in perception, memory or reasoning that prevent someone from accessing evidence pointing to an alternative belief.

    Now, I'm not saying this is definitely what happens in the case of delusions, and certainly I wouldn't want to generalise this to all delusions. This is an empirical question that needs to be investigated. But it seems to me that when we talk about the delusion being a "prepotent doxastic response" to an experience, or about the adoption of a delusion being "Bayesian rational", we are sketching a picture in which the person almost "has no choice" but to adopt the delusional hypothesis, because alternative hypotheses are either not an option or appear significantly less satisfactory as explanations of their experience.
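    To see what "Bayesian rational" could mean here, consider a toy two-hypothesis update (the numbers are invented purely for illustration, not drawn from any study): if an anomalous experience is far more likely under the delusional hypothesis than under the mundane alternative, even a low prior on the delusional hypothesis can leave it with the dominant posterior.

```python
# Illustrative Bayesian update over two hypotheses.
# H1 = the delusional hypothesis, H0 = the mundane alternative.
# All probabilities below are made-up numbers for illustration only.

def posterior(prior_h1, lik_h1, lik_h0):
    """P(H1 | experience) via Bayes' theorem, with H0 = not-H1."""
    joint_h1 = prior_h1 * lik_h1          # P(H1) * P(experience | H1)
    joint_h0 = (1 - prior_h1) * lik_h0    # P(H0) * P(experience | H0)
    return joint_h1 / (joint_h1 + joint_h0)

# A low prior on H1 (0.05), but the anomalous experience is much easier
# to explain under H1 (likelihood 0.9) than under H0 (likelihood 0.01).
p = posterior(prior_h1=0.05, lik_h1=0.9, lik_h0=0.01)
print(round(p, 3))  # prints 0.826 — the posterior favours H1
```

    On these (invented) numbers the "rational" response, given the evidence actually available to the person, is to adopt H1: the alternative hypothesis is a far worse explanation of the experience.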

  3. Regarding (a), here is an example I took from Berker (2013). Even beliefs that are epistemically questionable can contribute to the satisfaction of epistemic goals:

    Alice is a scientist whose research will be funded by an organisation only if the review board of this organisation thinks that she genuinely believes in the existence of God. Alice forms the belief that God exists, because this enables her to pursue her research project and acquire new true beliefs that she would not be able to acquire otherwise.

    More mundanely, my (false) belief that I am a brilliant public speaker is instrumental to my volunteering for a work-in-progress seminar in my department. After my (awful) talk, I receive helpful feedback from the audience and I come to acquire new true beliefs, or revise previously held beliefs that were unjustified.

  4. Re (a): I was hoping for examples of actual real-life cases where having a false belief did foster knowledge, rather than just imaginary examples. How do we know that it could ever actually happen, in real life, that a person who does not believe that God exists would, after realising that she'd get a grant if only she did believe this, come to believe it?

    Is your proposal that false beliefs can be epistemically beneficial just a proposal-in-principle, which might never be true in the real world? Or are you suggesting that it actually happens in the real world that people who hold false beliefs sometimes do epistemically benefit from them?

    1. Hi Max. The question you raise regarding the Berker example is whether one can believe at will, and it is too complex an issue to be dealt with in a comment! I agree the case is controversial. Indeed, Berker raises it as a problem for epistemic consequentialism.

      But the second example I offer is less exotic and more realistic.

      Yes, I do think that in real life some false beliefs contribute to the satisfaction of epistemic goals, including the acquisition of true beliefs. Once I have a good argument or a better example for that claim, I'll let you know!

  5. And re your comment on (b) above: what has that got to do with epistemic benefit? You don't mention that at all in your comment.

    So I still can't understand why you think that one's having a false belief can ever be epistemically beneficial i.e. make one a better knowledge-acquirer.

    1. Indeed, the "no alternatives" condition doesn't by itself tell us that a person's false belief about x has epistemic benefits, it just tells us that the person may not be in a position to form a true belief about x. (I should have been clearer about this in the post, apologies.)

      The "no alternatives" condition, combined with the "epistemic benefit" condition, gives us a reason to think that it may not be epistemically wrongful to have the belief. Whether the false belief is beneficial will depend on the consequences of having that belief.

      A delusion may have indirect epistemic benefits in that it allows a person to avoid depression (as with the delusion of pregnancy in the example) and, by doing so, it creates the conditions for the person to engage with the world around her in a way that is more conducive to learning new things and receiving feedback.

      I believe this issue will be further explored in the third part of my interview with Martin, so I won't say more about it until next Thursday...

