Responsibility for Implicit Bias

Natalia Washington
I am a graduate student in the Philosophy department at Purdue University. My research interests lie at the intersection of philosophy of mind, cognitive science, moral psychology, and scientific psychiatry—and especially in externalist viewpoints on these subjects.

In a forthcoming paper with Dan Kelly, we defend a kind of social externalism about moral responsibility in the case of implicit bias, a particular kind of “imperfect cognition.” For those who aren’t familiar, implicit biases are unconscious and automatic negative evaluative tendencies about people based on their membership in a stigmatized social group—for example, on gender, sexual orientation, race, age, or weight. Because implicit biases operate without our conscious awareness, one might worry about the prospects for holding individuals responsible for behaviors that are influenced by biases, as mounting evidence suggests they are.

Our work addresses this challenge by applying philosophical theories of moral responsibility to behaviors influenced by implicit bias. The driving question is whether anything about the nature or operation of implicit biases themselves guarantees that behaviors influenced by implicit bias should inevitably be excused. Our answer is no. We argue that there are clear-cut cases where an individual can be held responsible for behaviors influenced by biases she does not know she has, and which she would disavow were she made aware of them.

The key idea is that an individual’s epistemic environment bears on her level of responsibility. By way of analogy, a student who does not know her exam date is still held responsible for the contents of the class syllabus. A doctor is responsible for keeping up with the changing state of medical know-how and know-that. Neither individual is in a position to claim ignorance.

Thus consider the difference between a hiring committee member who, under the influence of a bias she does not know she harbors, unfairly evaluates a stack of CVs in 1988—before we knew much of anything about implicit bias—and a hiring committee member who does the same in 2013. They may each claim that they did not know they were making biased evaluations, and should therefore be excused. But our committee member from the present day is not in a position to make recourse to this excuse.

Like her counterpart from the past, the hiring committee member from 2013 did not know that she evaluated the CVs unfairly, but she ought to have known. We now know much more about these things, and the relevant knowledge is out there in a way that it wasn’t back in 1988. As someone in charge of a hiring process in a time and place where it is known that the perceived gender or race of an applicant has a biasing effect, it was her responsibility to do something to mitigate that effect—for example, by evaluating the CVs blindly.

Knowledge can affect moral responsibility. One upshot of this argument is that increases in knowledge can raise the standards for what we can be held responsible for. Raising our level of responsibility can also gain us freedom from the unwanted effects of our imperfect cognitions. Thus another upshot worth investigating—especially as egalitarians—is that being influenced by implicit bias is not inevitable. For more on overcoming bias, see recent work by Alex Madva and Michael Brownstein.
