Wednesday, 31 July 2013

Responsibility for Implicit Bias

Natalia Washington
I am a graduate student in the Philosophy department at Purdue University. My research interests lie at the intersection of philosophy of mind, cognitive science, moral psychology, and scientific psychiatry—and especially in externalist viewpoints on these subjects.

In a forthcoming paper with Dan Kelly, we defend a kind of social externalism about moral responsibility in the case of implicit bias, a particular kind of “imperfect cognition.” For those who aren’t familiar, implicit biases are unconscious and automatic negative evaluative tendencies about people based on their membership in a stigmatized social group—for example, on gender, sexual orientation, race, age, or weight. Because implicit biases operate without our conscious awareness, one might worry about the prospects for holding individuals responsible for their behaviors when, as mounting evidence suggests they often are, those behaviors are influenced by biases.

Our work addresses this challenge by applying philosophical theories of moral responsibility to behaviors influenced by implicit bias. The driving question is whether anything about the nature or operation of implicit biases themselves guarantees that behaviors influenced by them should inevitably be excused. Our answer is no. We argue that there are clear-cut cases where an individual can be held responsible for behaviors influenced by biases she does not know she has, and which she would disavow were she made aware of them.

The key idea is that an individual’s epistemic environment bears on her level of responsibility. By way of analogy, a student who does not know her exam date is held responsible for the contents of the class syllabus. A doctor is responsible for the changing state of medical know-how and know-that. Neither individual is in a position to claim ignorance.

Thus consider the difference between a hiring committee member who, under the influence of a bias she does not know she harbors, unfairly evaluates a stack of CVs in 1988—before we knew much of anything about implicit bias—and a hiring committee member who does the same in 2013. They may each claim that they did not know they were making biased evaluations, and should therefore be excused. But our committee member from the present day is not in a position to make recourse to this excuse.

Like her counterpart from the past, the hiring committee member from 2013 did not know that she evaluated the CVs unfairly, but she ought to have known. We now know much more about these things, and the relevant knowledge is out there in a way that it wasn’t back in 1988. As someone in charge of a hiring process in a time and place where it is known that the perceived gender or race of an applicant has a biasing effect, it was her responsibility to do something to mitigate that effect—for example by evaluating the CVs blindly.

Knowledge can affect moral responsibility. One upshot of this argument is that increases in knowledge can raise the standards of what we can be held responsible for. Raising our level of responsibility can also gain us freedom from the unwanted effects of our imperfect cognitions. Thus, another upshot worth investigating—especially as egalitarians—is that being influenced by implicit bias is not inevitable. For more on overcoming bias see recent work by Alex Madva and Michael Brownstein.


  1. Interesting and highly policy-relevant! Your thinking in this area has relevance for work I am currently doing on the ethical/philosophical underpinnings of hate crime policy and law. But here are two (clusters of) queries.

    First, do you limit yourself to construing the responsibility for an action flowing out of unconscious (though in principle knowable) social bias as a sort of negligence? And how do you picture that limitation influencing how someone could properly be held responsible (in a legal context, for instance)? Normally, given the nature of the behaviour, negligence would imply much less culpability and be more easily excused than, e.g., knowledge. Is this how you view the jurisprudential implications?

    Second, how do you view the relationship between the moral responsibility you talk about as being ascribable to someone and various forms of holding people responsible? Would you, for instance, OK an idea where P is responsible for act A according to the description here, but where, in the legal context, his/her culpability is assessed as much stronger (implying, e.g., a much harsher sentence than negligence culpability would otherwise have implied in a typical retributivist framework)?

    1. Thanks for your interest, Christian! There is probably much that can be said about the interaction between policy and this view. Unfortunately, I don't know very much about hate crime law or jurisprudence. Can you perhaps give an example of the typical retributivist framework and how you're using the terms "negligence" and "culpability" in this context?

  2. So, a typical example would be how most legal systems distinguish with regard to culpability ascription (that is, how much of a legal wrongdoer—in the sense relevant for sentencing—a perpetrator is) between manslaughter (= knowledge) and involuntary manslaughter (= negligence), where the offender as a rule is held less responsible in the latter case (that is, ceteris paribus, given a less harsh sentence). In hate crime laws, bias is introduced as a reason to aggravate sentences for independently defined crimes. In some such cases, offenders themselves are not aware of the bias, but it becomes visible during the police and prosecutor investigations. The idea here has normally been that the presence of such bias linked to the crime can serve as a sentence-enhancing reason just as, e.g., brutality or inconsideration can (that is, motivating substantial sentencing enhancement). But such reasoning becomes less convincing if the role of the bias with regard to culpability is the weakest there is (i.e., negligence). In parallel, discrimination crimes of the sort you described would in that context become rather petty, compared to if you apply, e.g., a recklessness analysis of what's going on, or apply strict liability (but that's of course controversial in a retributivist framework).

    1. At first blush it seems to me that the difference between manslaughter and involuntary manslaughter is analogous to the difference between an explicitly racist behavior and an implicit one. So, for example, a hiring committee member who knowingly and purposefully chooses all male candidates with white-sounding names from a stack of diverse and equally qualified CVs is, to my mind, much more responsible than a committee member whose same choice is unknowingly influenced by implicit bias.

      The question is whether we want to consider this difference (between the explicit and implicit racist) as a case of knowing versus negligent wrongdoing, or if we are more interested in the difference between two implicitly biased individuals: one from 1988, and one from 2013. Dan and I argue that the committee member from 2013 bears a larger part of the blame, but I’m not sure how we should evaluate them from a standpoint of legal culpability. Are both negligent, for instance? Or, is there any sense to be made of saying that one was more negligent than the other?

