Jules Holroyd summarises her paper 'Implicit Bias, Awareness and Imperfect Cognitions'.
Implicitly biased actions are those that manifest the distorting influence of implicit associations. For example, a member of a hiring committee might demonstrate implicit bias in undervaluing the research history of an applicant because of negative implicit associations with her gender or race. Implicit associations are typically characterised as operating automatically, fast, and beyond the reach of direct control. Sometimes they are also characterised as unconscious. This last thought - that they are associations of which we are not aware - is a premise used in arguments for the conclusion that individuals cannot be responsible for the extent to which their actions are implicitly biased. How could that margin of discrimination be something you are responsible for, if you were unaware of the association producing it, or that your behaviour was discriminatory at all?
This is a tempting thought (and one articulated, in various guises, in Jennifer Saul 2013 and Natalia Washington & Dan Kelly 2013), but in this paper I argue against it. It is not at all obvious that ignorance exculpates. What matters is not what an individual is or is not aware of - does or does not know - so much as what she should be aware of. So, if we want to evaluate whether an individual is exculpated from responsibility, it does not suffice to show that she lacks awareness of some relevant facts (such as that she is implicitly biased, or that her action is discriminatory). It may be that she lacks awareness of something that she should be aware of. And that state of ignorance might itself be culpable. Accordingly, we need to isolate the relevant normative condition (of what should an individual be aware), and then ask two questions: i) can individuals have this sort of awareness?; ii) when they lack it, is this lack culpable? This latter question asks us to explore why individuals may lack awareness of implicitly biased actions, and evaluate whether this lack is itself innocent or culpable.
In the context of implicit bias, things are complicated by the fact that there are various senses of awareness at issue. Sometimes authors seem to be concerned with introspective awareness of the implicit association (or its activation) itself (Saul, 2013). Other times, authors seem to be concerned with what I call 'inferential awareness' of implicit bias: awareness of the body of knowledge about implicit bias, from which we might make inferences about the likelihood of our own actions being biased (Washington & Kelly, 2013). Finally, we might be concerned with what I call 'observational awareness' of the morally relevant features of one's actions (that they are subtly discriminatory, say).
This generates three candidate normative conditions, roughly:
i) agents should have awareness of their implicit associations
ii) agents should have inferential awareness that their actions might be implicitly biased
iii) agents should have observational awareness that their actions have the property of being subtly discriminatory.
I argue that we should not endorse the first two conditions. First, there is no good reason to demand that individuals have introspective awareness of their implicit associations. It is not clear that this is possible; but in any case, it is not a demand we make of each other in other contexts. Agents can be responsible despite lacking introspective awareness of various sorts (about the sources of their actions, their ulterior motives, the processes influencing their decisions). Second, the requirement that individuals have inferential awareness is rather too demanding.
Washington & Kelly suppose that the requirement to engage with the findings of empirical psychology, in order to gain this sort of inferential awareness, will apply only to those with certain role responsibilities (hiring committee members, say), because of the impact of bias among those in such 'gatekeeper' roles. This restriction seems to me implausible: many people make decisions about who to hire, who to fire – literal 'gatekeeper' decisions – but also about who to grant a loan to, where to live, who to stop and search, who to give a lift to, what news stories to report (and how), who to write prescriptions for, who to sit by on a train, how to evaluate co-workers, who to smile at, what grades to assign or references to write, who to cross the road to avoid, who to believe, who to befriend . . . and so on. Small effects of implicit bias in such scenarios can have significant and cumulative impact. So the requirement should surely be extended more broadly than Washington & Kelly suppose. Yet it is implausibly demanding to require that everyone whose actions might be implicitly biased must engage with the findings of empirical psychology.
Instead, I endorse the third condition: that individuals should have observational awareness of the morally relevant features of their actions, such as that they are subtly discriminatory. This is the sort of demand we usually make of each other. Moreover, I canvass empirical evidence which indicates that individuals can have this kind of awareness with respect to implicitly biased action; so it is not unreasonable to demand it.
However, it seems that we frequently do not have this kind of awareness. When individuals lack such awareness, is this failing culpable? That will depend on the cause of the failing: I suggest it may be due to over-confidence in our own objectivity and impartiality, some failure of attentiveness, or perhaps motivated self-deception - of course we don't like to think of ourselves as being biased! When these explanations are in play, though, our lack of awareness will be culpable. So not knowing what we should know won't excuse us from responsibility for implicitly biased actions.