
Implicit Bias and Epistemic Innocence

In this post I will suggest some reasons for thinking that at least some beliefs based on implicit bias are epistemically innocent. An implicit bias is a bias ‘of which we are not aware […] and can clash with our professed beliefs about members of social groups’, and which can ‘affect our judgments and decisions’ (Crouch 2012: 7). Empirical work has shown that such biases are held by ‘most people’, even those who avow egalitarian positions or who belong to the targeted group (Steinpreis et al. 1999).

As Lisa and I have said in previous posts, we understand a cognition to be epistemically innocent when it meets two conditions. Here are the conditions a belief based on implicit bias would have to meet in order to be epistemically innocent:

1. Epistemic Benefit: The belief delivers some significant epistemic benefit to an agent at a time (e.g., it contributes to the acquisition, retention or good use of true beliefs of importance to that agent).

2. No Relevant Alternatives: Alternative beliefs that would deliver the same epistemic benefit are unavailable to the agent at that time.

Let us look first at the No Relevant Alternatives condition. My thought is that if my belief that p is guided by, or grounded on, my implicit bias pertaining to some group which my belief is about, there is a sense in which alternative beliefs are unavailable. This is because my bias might incline me to attend selectively to evidence, and more generally to form beliefs in ways which are not as truth-regulative as they would be without the bias. Further, because my bias is implicit, because it is not something of which I am aware, I cannot correct for it (though note that it does not follow that I could correct for it even if I became aware of it, so alternative cognitions may still be unavailable).

With respect to unawareness, Jennifer Saul claims that the literature supports the view that we are unaware of the implicit biases we display (Saul 2013: 43). Similarly, Jules Holroyd claims that our harbouring implicit biases is not something that we can know by ‘introspection, reflection, or self-report on one’s motives’ (Holroyd 2012: 275). However, Holroyd resists the claim that implicit biases are present solely because of cultural factors, suggesting instead that ‘the extent to which we manifest biases may rather be a function of other cognitive states we have, and over which we plausibly have control’ (Holroyd 2012: 280). Whatever is right about the etiology of implicit biases, that is, whether they are due solely to our surroundings or also in part to other attitudes we have, I do not think it bears on whether they meet the No Relevant Alternatives condition on epistemic innocence.

Let us move to the Epistemic Benefit condition. This condition is less obviously met by beliefs based on implicit bias, as such beliefs are not appropriately sensitive to evidence, and are often false. Nevertheless, beliefs based on implicit biases may, at the very least, have indirect epistemic benefits. We might usefully think of implicit biases as schemas, which are 'mental frameworks of beliefs, feelings, and assumptions about people, groups, objects' which can 'help us make sense of the world' (Anderson 2010: 10-11). Further, '[t]hese schemas filter information, helping us to determine what should be paid attention to and what can be disregarded. They save us time' (Crouch 2012: 7).

Having the belief that a set of CVs is less good than a comparable (perhaps identical!) set of CVs (see Steinpreis et al. 1999) might be epistemically beneficial insofar as it makes decision-making processes quicker. This is in line with Crouch’s claim that

'[P]eople who are pressed for time and cognitively overloaded tend to rely on their schemas or stereotypes more automatically. People higher up in hierarchies tend to be people who juggle a lot of tasks and information, and to be pressed for time, and so are not likely to take the time to consider decisions in ways that might avoid their pre-existing schemas' (Crouch 2012: 8).

So it looks like at least some beliefs based on implicit bias might be epistemically innocent, in virtue of alternatives being in some sense unavailable, and their having at least indirect epistemic benefits. In my next post I'll write about the implications of this result for responsibility for implicit bias and how we ought to tackle it.
