
What does debiasing tell us about implicit bias?

Nick Byrd is a PhD candidate and Fellow at Florida State University, working in the Moral & Social Processing (a.k.a., Paul Conway) Lab in the Department of Psychology, and in the Experimental Philosophy Research Group in the Department of Philosophy at Florida State University. In this post, he introduces his paper “What we can (and can’t) infer about implicit bias from debiasing experiments”, recently published in Synthese.


Implicit bias is often described as associative, unconscious, and involuntary. However, philosophers of mind have started challenging these claims. Some of their reasons have to do with debiasing experiments. The idea is that if debiasing is not entirely involuntary and unconscious, then implicit bias is not entirely involuntary and unconscious.

Sure enough, some evidence suggests that debiasing is not entirely involuntary and unconscious (e.g., Devine, Forscher, Austin, & Cox, 2012). So it seems that implicit bias can be conscious and voluntary after all—i.e., it can be reflective.

Now, why would philosophers think that debiasing is not associative? I worry that this non-associationism rests on a couple of mistakes.

First, there is a philosophical mistake; it’s what I call the any-only mixup (Section 0 of the paper): the mistake of concluding that a phenomenon is not predicated on any instances of a particular process when the evidence merely shows that the phenomenon is not predicated on only instances of that particular process.

The second mistake is more empirical. It is the mistake of overestimating evidence. As you may know, the open science movement has been reshaping psychological science for years. Part of this movement aims to improve studies’ statistical power to detect true effects by, among other things, increasing sample sizes and taking statistical significance more seriously.


When I reviewed the philosophical literature about implicit bias, I sometimes found philosophers appealing to small sample sizes and marginally significant effects. Further, when I considered only strong evidence, I did not find evidence that uniquely supported a non-associationist view of implicit bias.

If we want to adopt a view of implicit bias that does not rest on these mistakes, then I suggest some rules of thumb. To avoid the any-only mixup, I recommend that we employ something like the following principle (Byrd, 2019, Section 2.2):

Negative Intervention Principle. S is not predicated on P-type processing just in case both P-type manipulations or measurements and non-P-type manipulations or measurements are employed and, empirically, only non-P-type processes cause a change in S. 

That’s the technical recommendation. The less technical recommendation is to visualize the various interpretations of any given debiasing result—e.g., Figure 3, from Byrd, 2019.


When it comes to paying more attention to statistical descriptions, I relay some rules of thumb recommended by methodological reformers in psychology (Byrd, 2019, Section 3.3):

A common rule of thumb for sufficient statistical power is to have a minimum of about 50 participants, per experimental condition (Simmons, Nelson, & Simonsohn, 2013, 2018)…. In lieu of a proper power analysis, some researchers recommend estimating power as follows: p = .05 → power ≈ .5; p = .01 → power ≈ .75; p = .005 → power ≈ .8; and p = .001 → power > .9 (Greenwald, Gonzalez, Harris, & Guthrie, 1996). [However], recent replication attempts suggest that if this estimation errs, it errs on the side of overestimation (Camerer et al., 2018; Open Science Collaboration, 2015). 
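The rule of thumb quoted above can be restated as a simple lookup. Here is a minimal sketch in Python; the function name `estimate_power` and the encoding of the heuristic as a threshold table are my own illustration, not anything from the paper, and (as the quote itself warns) this estimate tends to err on the side of overestimating power.

```python
def estimate_power(p_value: float) -> float:
    """Rough power estimate from an observed p-value, per the
    Greenwald et al. (1996) rule of thumb quoted above.
    Returns 0.0 when the result is not significant at .05,
    where the heuristic offers no estimate."""
    thresholds = [
        (0.001, 0.9),   # p <= .001 -> power > ~.9
        (0.005, 0.8),   # p <= .005 -> power ~ .8
        (0.010, 0.75),  # p <= .01  -> power ~ .75
        (0.050, 0.5),   # p <= .05  -> power ~ .5
    ]
    for cutoff, power in thresholds:  # check strictest cutoff first
        if p_value <= cutoff:
            return power
    return 0.0

print(estimate_power(0.004))  # 0.8
print(estimate_power(0.03))   # 0.5
```

A proper a priori power analysis remains the better practice; this table is only a quick sanity check on published results, alongside the 50-participants-per-condition minimum mentioned above.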

Ultimately, the nature of implicit bias is an empirical question. So our view of implicit bias may change with new and better evidence. In the meantime, I hope that we can agree to adopt better inference rules and standards of evidence.
