Nick Byrd is a PhD candidate and Fellow at Florida State University, working in the Moral & Social Processing (a.k.a. Paul Conway) Lab in the Department of Psychology and in the Experimental Philosophy Research Group in the Department of Philosophy. In this post, he introduces his paper “What we can (and can’t) infer about implicit bias from debiasing experiments”, recently published in Synthese.
Implicit bias is often described as associative, unconscious, and involuntary. However, philosophers of mind have started challenging these claims. Some of their reasons have to do with debiasing experiments. The idea is that if debiasing is not entirely involuntary and unconscious, then implicit bias is not entirely involuntary and unconscious.
Sure enough, some evidence suggests that debiasing is not entirely involuntary and unconscious (e.g., Devine, Forscher, Austin, & Cox, 2012). So it seems that implicit bias can be conscious and voluntary after all—i.e., it can be reflective.
Now, why would philosophers think that debiasing is not associative? I worry that this non-associationism rests on a couple of mistakes.
First, there is a philosophical mistake; it’s what I call the any-only mixup (Section 0 of the paper): the mistake of concluding that a phenomenon is not predicated on any instances of a particular process when the evidence merely shows that the phenomenon is not predicated on only instances of that process.
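Schematically (the notation here is mine, not the paper’s): let Proc(S) be the set of processes on which a phenomenon S is predicated, and let A(x) mean “x is associative.” The mixup is then the invalid inference:

```latex
% The any-only mixup, schematically (notation mine, not from Byrd, 2019).
\[
\underbrace{\neg \big( \forall x \in \mathrm{Proc}(S)\; A(x) \big)}_{\text{evidence: not \emph{only} associative}}
\;\not\Rightarrow\;
\underbrace{\neg \big( \exists x \in \mathrm{Proc}(S)\; A(x) \big)}_{\text{conclusion: not associative \emph{at all}}}
\]
```

The left-hand claim is compatible with some of the processes underlying S being associative; only the right-hand claim rules that out.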
The second mistake is more empirical: the mistake of overestimating evidence. As you may know, the open science movement has been reshaping psychological science for years. Part of this movement aims to improve the statistical power of studies to detect true positive results by, among other things, increasing sample sizes and taking statistical significance more seriously.
When I reviewed the philosophical literature about implicit bias, I sometimes found philosophers appealing to studies with small sample sizes and marginally significant effects. Further, when I considered only the strong evidence, I did not find evidence that uniquely supported a non-associationist view of implicit bias.
If we want to adopt a view of implicit bias that does not rest on these mistakes, then I suggest some rules of thumb. To avoid the any-only mixup, I recommend that we employ something like the following principle (Byrd, 2019, Section 2.2):
Negative Intervention Principle. S is not predicated on P-type processing just in case both P-type manipulations or measurements and non-P-type manipulations or measurements are employed and, empirically, only non-P-type processes cause a change in S.
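To make the principle’s two conjuncts concrete, here is a toy decision rule (a sketch of my own, not code from the paper; it reads “only” as requiring at least one effective non-P-type intervention):

```python
# Toy decision rule for the Negative Intervention Principle.
# A sketch under my own reading of the principle, not code from Byrd (2019).
from dataclasses import dataclass

@dataclass
class Intervention:
    p_type: bool     # True if a P-type manipulation or measurement
    changed_s: bool  # True if the intervention produced a change in S

def not_predicated_on_p(interventions: list[Intervention]) -> bool:
    """True only if the evidence licenses 'S is not predicated on
    P-type processing' under the principle."""
    # First conjunct: both P-type and non-P-type manipulations or
    # measurements must actually be employed.
    p_used = any(i.p_type for i in interventions)
    non_p_used = any(not i.p_type for i in interventions)
    if not (p_used and non_p_used):
        return False  # one-sided evidence cannot license the conclusion
    # Second conjunct: empirically, only non-P-type interventions
    # caused a change in S.
    p_changed = any(i.changed_s for i in interventions if i.p_type)
    non_p_changed = any(i.changed_s for i in interventions if not i.p_type)
    return non_p_changed and not p_changed

# Hypothetical example: an associative (P-type) intervention was tried and
# failed, while a non-associative intervention changed the bias.
results = [Intervention(p_type=True, changed_s=False),
           Intervention(p_type=False, changed_s=True)]
print(not_predicated_on_p(results))  # True
```

Note that the rule returns False when only one kind of intervention has been tried, which is the point of the first conjunct.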
That’s the technical recommendation. The less technical recommendation is to visualize the various interpretations of any given debiasing result—e.g., Figure 3 of Byrd, 2019.
When it comes to paying more attention to statistical descriptions, I relay some rules of thumb recommended by methodological reformers in psychology (Byrd, 2019, Section 3.3):
A common rule of thumb for sufficient statistical power is to have a minimum of about 50 participants, per experimental condition (Simmons, Nelson, & Simonsohn, 2013, 2018)…. In lieu of a proper power analysis, some researchers recommend estimating power as follows: p = .05 → power ≈ .5; p = .01 → power ≈ .75; p = .005 → power ≈ .8; and p = .001 → power > .9 (Greenwald, Gonzalez, Harris, & Guthrie, 1996). [However], recent replication attempts suggest that if this estimation errs, it errs on the side of overestimation (Camerer et al., 2018; Open Science Collaboration, 2015).
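For readers who want to check such rules of thumb themselves, here is a minimal power calculation using statsmodels, assuming a two-sample t-test and a medium effect size (d = 0.5); the effect size is an illustrative assumption, not a figure from the paper:

```python
# Minimal power check for the ~50-participants-per-condition rule of thumb.
# Assumes a two-sample t-test and a medium effect (d = 0.5); the effect size
# is an illustrative assumption, not a figure from Byrd (2019).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Achieved power with 50 participants per condition at alpha = .05.
power = analysis.solve_power(effect_size=0.5, nobs1=50, ratio=1.0,
                             alpha=0.05, alternative='two-sided')
print(f"power with n = 50 per group: {power:.2f}")  # ~0.70

# Participants per condition needed for the conventional .80 power.
n = analysis.solve_power(effect_size=0.5, power=0.80, ratio=1.0,
                         alpha=0.05, alternative='two-sided')
print(f"n per group for .80 power: {n:.0f}")  # ~64
```

On these assumptions, 50 per condition lands somewhat below the conventional .80 threshold, which illustrates why such rules of thumb are floors rather than targets.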
Ultimately, the nature of implicit bias is an empirical question. So our view of implicit bias may change with new and better evidence. In the meantime, I hope that we can agree to adopt better inference rules and standards of evidence.