
Implicit Bias, Awareness, and Imperfect Cognitions

This is the seventh in our series of posts on the papers published in a special issue of Consciousness and Cognition on the Costs and Benefits of Imperfect Cognitions. Here Jules Holroyd summarises her paper 'Implicit Bias, Awareness and Imperfect Cognitions'. 

Implicitly biased actions are those that manifest the distorting influence of implicit associations. For example, a member of a hiring committee might demonstrate implicit bias in undervaluing the research history of an applicant because of negative implicit associations with her gender or race. Implicit associations are typically characterised as operating automatically and quickly, beyond the reach of direct control. Sometimes they are also characterised as unconscious. This last thought - that they are associations of which we are not aware - is a premise used in arguments for the conclusion that individuals cannot be responsible for the extent to which their actions are implicitly biased. How could that margin of discrimination be something you are responsible for, if you were unaware of the association producing it, or unaware that your behaviour was discriminatory at all?

This is a tempting thought (and one articulated, in various guises, by Jennifer Saul (2013) and by Natalia Washington & Dan Kelly (2013)), but in this paper I argue against it. It is not at all obvious that ignorance exculpates. What matters is not what an individual is or is not aware of - does or does not know - so much as what she should be aware of. So, if we want to evaluate whether an individual is exculpated from responsibility, it does not suffice to show that she lacks awareness of some relevant facts (such as that she is implicitly biased, or that her action is discriminatory). It may be that she lacks awareness of something that she should be aware of. And that state of ignorance might itself be culpable. Accordingly, we need to isolate the relevant normative condition (of what an individual should be aware), and then ask two questions: i) can individuals have this sort of awareness? ii) when they lack it, is this lack culpable? The latter question asks us to explore why individuals may lack awareness of implicitly biased actions, and to evaluate whether this lack is itself innocent or culpable.


In the context of implicit bias, things are complicated by the fact that there are various senses of awareness at issue. Sometimes authors seem to be concerned with introspective awareness of the implicit association (or its activation) itself (Saul, 2013). Other times, authors seem to be concerned with what I call 'inferential awareness' of implicit bias: awareness of the body of knowledge about implicit bias, from which we might make inferences about the likelihood of our own actions being biased (Washington & Kelly, 2013). Finally, we might be concerned with what I call 'observational awareness' of the morally relevant features of one's actions (that they are subtly discriminatory, say).

This generates three candidate normative conditions, roughly:

i) agents should have awareness of their implicit associations

ii) agents should have inferential awareness that their actions might be implicitly biased

iii) agents should have observational awareness that their actions have the property of being subtly discriminatory.

I argue that we should not endorse the first two conditions. First, there is no good reason to demand that individuals have introspective awareness of their implicit associations. It is not clear that this is possible; but in any case, it is not a demand we make of each other in other contexts. Agents can be responsible despite lacking introspective awareness of various sorts (about the sources of their actions, their ulterior motives, the processes influencing their decisions). Second, the requirement that individuals have inferential awareness is rather too demanding.

Washington & Kelly suppose that the requirement to engage with the findings of empirical psychology, in order to gain this sort of inferential awareness, will apply only to those with certain role responsibilities (hiring committee members, say), because of the impact of bias on decisions made in such 'gatekeeper' roles. This restriction seems to me implausible: many people make decisions about who to hire and who to fire – literal 'gatekeeper' decisions – but also about who to grant a loan to, where to live, who to stop and search, who to give a lift to, what news stories to report (and how), who to write prescriptions for, who to sit by on a train, how to evaluate co-workers, who to smile at, what grades to assign or references to write, who to cross the road to avoid, who to believe, who to befriend . . . and so on. Small effects of implicit bias in such scenarios can have a significant and cumulative impact. So the requirement should surely extend more broadly than Washington & Kelly suppose. Yet it is implausibly demanding to require that everyone whose actions might be implicitly biased engage with the findings of empirical psychology.

Instead, I endorse the third condition: that individuals should have observational awareness of the morally relevant features of their actions, such as that they are subtly discriminatory. This is the sort of demand we usually make of each other. Moreover, I canvass empirical evidence which indicates that individuals can have this kind of awareness with respect to implicitly biased action; so it is not unreasonable to demand it.

However, it seems that we frequently do not have this kind of awareness. When individuals lack such awareness, is this failing culpable? That will depend on the cause of the failing: I suggest it may be due to over-confidence in our objectivity and impartiality, some failure of attentiveness, or perhaps motivated self-deception - of course we don't like to think of ourselves as being biased! When these explanations are in play, though, our lack of awareness will be culpable. So not knowing what we should know won't excuse us from responsibility for implicitly biased actions.
