
Epistemically Useful False Beliefs



Duncan Pritchard (pictured above) is Professor of Philosophy at the University of Edinburgh and Director of the Eidyn research centre. In this post, he summarises his paper on epistemically useful false beliefs, which is forthcoming in a special issue of Philosophical Explorations on false but useful beliefs. The special issue is guest edited by Lisa Bortolotti and Ema Sullivan-Bissett and is inspired by project PERFECT's interests in belief.

It seems relatively uncontroversial that false beliefs can often be useful. For example, if one’s life depends on being able to jump that ravine, then it may be practically useful to have a false belief about how far one can jump. In particular, that one overestimates one’s jumping ability may well give one the confidence needed to wholeheartedly attempt the jump. Having an accurate conception of one’s abilities in this regard, in contrast, might lead one to falter, thereby consigning oneself to certain death (rather than possible escape). Moreover, notice that this utility needn’t be a one-off, in that one could imagine cases where it is systematically advantageous to have certain false beliefs (perhaps one occupies an environment where overestimating one’s abilities is regularly conducive to one’s survival).

The question that concerns me, however, is whether there is a philosophically significant class of false beliefs that are specifically epistemically useful. The reason why this is an interesting question is that it is part of the very nature of the epistemic that it is concerned with the promotion of truth and the avoidance of error. With that in mind, how could a false belief be epistemically useful?

We need to refine our question a little here, which is why I am focussing on whether there is a philosophically significant class of false beliefs that are epistemically useful. The reason for this is that there are clearly some uncontroversial cases where false beliefs are epistemically useful. In making a calculation, for example, having a false belief might help one to gain a correct result because it cancels out a previous error. What would be philosophically interesting, however, and potentially in tension with our traditional view of the nature of the epistemic, is whether this epistemic utility could be sustained over the long term. In particular, what we are interested in is whether false belief can ever be systematically epistemically useful.

In the paper I approach this question in a piecemeal fashion by considering a selection of cases which might look like plausible examples of false beliefs that are systematically epistemically useful. The first concerns the kinds of strictly false claims that are sometimes employed in scientific reasoning, such as appeals to idealisations (like the ideal gas law). I argue that when we look at these cases more closely, however, it isn’t credible that having a false belief specifically (as opposed to, say, accepting a false proposition) is systematically generating epistemic value.


The second concerns epistemic situationism. This view argues that what generates our cognitive successes often has more to do with incidental features of our environment than with the operation of our cognitive abilities. The reason why this view is relevant for our purposes is that it opens up the possibility that we might have false beliefs about the nature of our cognition and yet that these beliefs are nonetheless conducive to being a good cogniser. Again, I don't think that such cases stand up to scrutiny. In particular, it is hard, on closer inspection, to identify a specific false belief on the part of the subject that is systematically leading to epistemic utility.



Finally, third, I look at the Wittgensteinian notion of a hinge commitment. These are held to be commitments that one is required to have if one is to be a rational subject at all, but which can never themselves be rationally grounded. A possible upshot of this idea is that having hinge commitments is epistemically useful even if they are false. The problem here, however, is that once we understand what is involved in the notion of a hinge commitment, then it becomes hard to take seriously that they are genuinely beliefs at all, at least as epistemologists normally understand this propositional attitude (which is, of course, the conception of belief that is relevant for our purposes).

Of course, that these three particular views fail to offer us bona fide philosophically interesting cases of epistemically useful false belief does not demonstrate that there can be no such cases. Such is the drawback of adopting a piecemeal approach to the issue. But I think that in examining these sample cases we nonetheless do gain a good basis for being at least highly sceptical about the possibility of such cases.
