
'Good' Biases

This post is about a paper by Andrea Polonioli, Sophie Stammers and myself, which recently appeared in Revue philosophique de la France et de l'étranger, in which we ask whether some common biases have any benefits for individuals or groups.

Our behaviour as agents can have a multiplicity of goals. These might be pragmatic in nature (for example, fulfilling practical goals such as being well fed). They might be psychological in nature (for example, increasing wellbeing or reducing anxiety). They might also be epistemic in nature, and have to do with the attainment of true beliefs about ourselves or the world. Epistemologists have identified different notions of epistemic attainment, and different senses in which one can fail epistemically by being doxastically irrational.

Doxastic irrationality is the irrationality of beliefs. It manifests in different ways and comprises: (a) beliefs that do not cohere with each other or that violate basic principles of formal logic or probability theory; (b) beliefs that are factually erroneous; (c) beliefs that are not well supported by, or responsive to, evidence; (d) beliefs that are poorly calibrated because people assign inaccurate degrees of confidence to them; (e) beliefs that are poorly integrated in people's behaviour. In this article we are interested in the effects, on individual agents and groups of agents, of some biases leading to doxastic irrationality.

So, doing well epistemically can mean fulfilling epistemic goals such as: (a) having beliefs that cohere, and which are based on principles of formal logic or probability theory; (b) having beliefs that are factually accurate; (c) having beliefs that are well supported by, or responsive to, evidence; (d) having well-calibrated beliefs; (e) having beliefs that are well integrated in people's behaviour.

Agents' behaviour may be assessed negatively when it fails to satisfy the epistemic goals above, or other epistemic goals such as attaining beliefs that encourage the exchange of information with other agents, or developing an intellectual virtue such as curiosity or honesty. But there are other reasons why a behaviour may be negatively assessed, for instance when it fails to fulfil other types of goals (pragmatic or psychological). Sometimes an instance of behaviour can succeed at fulfilling some goals while failing to fulfil others. Furthermore, new costs and benefits may emerge when the behaviour in question is assessed in the context of groups to which the agent belongs, rather than simply at the individual level.

In the paper, we argue that assessments of failures of doxastic rationality need an analysis that is sensitive to the multiplicity of goals of human behaviour at both the individual and the group level. We argue that such an analysis can reveal some underexplored costs, as well as benefits, that can be pragmatic, psychological, and epistemic in nature. Examples of biases that may lead to doxastic irrationality include the overconfidence bias, biases about one's own and other social group(s), and optimistically biased beliefs about the self.

The intergroup bias is the systematic tendency to favour members of groups to which we belong over members of groups to which we do not belong. Stammers argues that, at the level of the individual, intergroup biases may have some negative epistemic and practical outcomes, but can also deliver some indirect psychological benefits by raising self-esteem and enhancing psychological functioning.
At the group level, the picture is more complex. The intergroup bias has a number of obvious societal costs, given that disfavouring people and denying their access to certain social goods on the basis of their social identity clearly violates fairness and equality. Intergroup biases can also damage social cohesion and cooperation and exacerbate intergroup conflict which can be detrimental to a well-functioning society. Further, because the intergroup bias tends to be greater in already dominant social groups with greater access to societal resources, it may exacerbate the unfair distribution of resources among non-dominant groups.

However, various studies demonstrate that intergroup bias against a dominant group mediates a non-dominant group's awareness of discrimination and disadvantage. To the extent that attitudes negatively biased against a dominant group enable a marginalised group to recognise how discrimination acts against their interests, intergroup biases are advantageous. For instance, a disadvantaged group that holds negative perceptions of an advantaged group predicts how that group will distribute a reward to them more accurately than a disadvantaged group that holds positive out-group perceptions.

An important lesson emerging from our discussion is that any alleged benefits at the individual level tell only part of the story when assessing the effects of doxastic irrationality: benefits at the individual level may come with costs at the group level. This means that, as long as discussions of the costs and benefits of doxastic irrationality are committed to methodological individualism, their conclusions should be adequately qualified.
