Bias and Blame: Interview with Jules Holroyd




In this post, I interview Imperfect Cognitions network member Jules Holroyd, Vice-Chancellor’s Fellow in the department of philosophy at the University of Sheffield, and Principal Investigator of the Leverhulme Trust funded Bias and Blame project. The project runs from 2014-2017 and the team includes senior lecturer Tom Stafford and postdoctoral researcher Robin Scaife in the department of psychology, and PhD student Andreas Bunge in the department of philosophy.

SS: The Bias and Blame project investigates the relationship between moral interactions, such as blame, and the manifestation of implicit bias. How did you become interested in this topic, and has there been much previous research in this area?

JH: The project looks principally at whether moral interactions, such as blaming, impact on the expression of implicit racial bias. The interest in this question arose out of the philosophical debates about responsibility for bias, in which two claims seemed to be prominent: first, that individuals are not responsible for implicit bias (for having it, or for manifesting it). I disagreed with this claim, and have argued in various places (here, here and here) that it is not at all obvious that any general exculpating conditions hold in relation to our discriminatory behaviour that results from implicit bias.

Second, authors have claimed that irrespective of individuals’ responsibility, we should not blame individuals, since that would be counterproductive. It might provoke hostility and backlash, and make people less motivated to buy in to the project of tackling discrimination and attendant problems of under-representation. This sort of claim is found in some of Jenny Saul’s early work on implicit bias, and more recently in Manuel Vargas’s work (on his revisionist conception of responsibility in relation to implicit bias). I can see the appeal of this kind of claim, and the reasons for caution with our use of blame. But ultimately the impact of blame on implicit attitudes and individual motivation (explicit and implicit) is an empirical question. There hadn’t been a great deal of empirical research into this issue: some studies looked at the role of moral confrontations in combating the expression of bias (Czopp and Monteith, 2006). Others had examined the role of inducing guilt in blocking its expression (Moskowitz & Li 2011). These findings seemed to indicate that under certain conditions, moral interactions and the provoking of moral emotions could have positive effects on bias mitigation: not the sort of backlash that had been worried about.

Moreover, this kind of intervention – harnessing the resources of our moral interactions with each other – seemed promising in comparison with some of the more individualistic and mechanistic attempts to alter individual cognition (which have been notoriously difficult to replicate and sustain). But no one had yet looked at how blame might impact on implicit biases and their expression. It looked like the sort of question that we could construct an experimental design to test. And this is what we were able to do, with the funds from the Leverhulme Project Research Grant.

SS: One might assume that progress in empirical work on implicit bias is mostly within the purview of psychology, but your research utilises concepts from philosophical study to both inform empirical investigations and to interpret the results. This is obviously something that we’re interested in at PERFECT. In your opinion, what is the value of interdisciplinary work on implicit bias, and co-operation between philosophers and psychologists more generally?

JH: The interdisciplinary nature of our research has been crucial. What we are exploring is essentially an empirical question that arises out of philosophical debate. But the notions deployed in the experimental design – holding morally responsible, expressing blame – are concepts that have been philosophically honed, and it was important that their role in the experimental process adequately reflected the notions that philosophers have been working with (and worrying about).

At the same time, input was needed from the experimentalists on the project (Tom and Robin), since the framing of those notions in the experiment needed to be empirically operational: there is no point deploying concepts that are philosophically rigorous, but opaque or alien to the participants in the studies (who were not philosophers). Later in the process, when we had the data set from the studies, interpreting them and anticipating their significance for a range of philosophical debates, required both statistical analysis and conceptual work, so again, having philosophers and psychologists around the table was invaluable at that stage too.

We are fortunate in that over time, we’ve had various interactions (reading groups, feedback on each other’s work) that enabled us to come to common understandings of terminology and the angles we each approach things from. And we get on really well, so even where there are disagreements they are never (not to date, at least!) irresolvable!

The whole process, from conception of the research question, to experimental design, to interpretation of the findings, has been rigorously interdisciplinary. This has enabled us to do research that we simply could not have done otherwise! And, we’ve reached some preliminary conclusions that, we hope, make a valuable contribution to the philosophical debates…

SS: Results from your experiment on moral interactions and implicit bias are currently under review (perhaps you have a preferable way to phrase this!) – can you tell us a bit more about the study, what you discovered, and how these findings challenge assumptions often made in implicit bias training?

JH: The draft of the paper (and all the steps from the experimental process – design, data analysis and so on) is currently posted at the Open Science Framework here, so readers can take a look at the draft paper. The key findings in terms of implicit bias were surprising to me. I had predicted that we would see blame having an impact on the expression of implicit bias – that if we managed to induce guilt in participants, we’d see something like the impact found in the Moskowitz and Li study. But our findings didn’t confirm this prediction. Although implicit bias scores in the blame conditions were consistently lower than in various controls, these differences were not all significant. This suggests that blame may not reduce implicit bias, or, if it does, that the effect is small.

However, these results do rule out the possibility that blame (at least as we operationalized it) increases implicit bias by any non-trivial amount. Moreover, the blame condition had a big impact on people’s self-reported intentions to do something about implicit racial bias. This was long lasting, too: at a six-month follow-up, these intentions were still reported. This seems really important, since the impact is on people’s motivations to change. So, whilst we didn’t find that blame reduces implicit bias, our findings do undermine the assumption made by philosophers that blame produces a backlash effect, and prevents buy-in to the project of addressing racial bias.

One thing we consider in the paper is why these assumptions were not borne out. It may be that there are different paradigms of blame that people have in mind: telling people they are bad and wrong (which may indeed provoke hostility) versus reminding people they have violated a norm that they themselves rightly subscribe to (which seems to better capture what is going on with implicit bias). Looking at how blame is received and understood is something that seems ripe for future research (Malle has done some preliminary work on this, which looks really interesting). Likewise, investigating how different kinds of moral interaction impact on attitudes is a future question worth exploring (why blame if other responses get similar, or better, effects?). So, as with any good project – we got some interesting preliminary findings, and ended up with a host of new empirical and philosophical questions. We also got some interesting exploratory analyses relating to the participants’ awareness of their biases, and the relationship with other measures (such as intellectual humility). It’s all in the paper!

SS: It sounds like the findings have opened up some further avenues for research concerning moral interactions and implicit bias. Do you have plans for further investigations?

JH: In fact, in future work we hope to take a slightly different direction. We’ve been organising (with Alex Madva and Erin Beeghly) a series of workshops on Bias in Context (see here and here for details!). These focus on getting a better understanding of the relationship between individual cognition and social context – both in terms of the causal contributors to discrimination and under-representation, and the remedies for addressing these problems. Whilst our Leverhulme project moved away from individualism by looking at how interpersonal interactions could shape our attitudes, we want to ‘socialise’ the research even more, by examining how interventions that target social meaning and social stereotypes directly might impact on individuals’ attitudes (implicit and explicit).

This is another example of how the interaction between philosophy and psychology can be fruitful. Philosophy has brought critiques of implicit bias research and its individualistic focus to the table, and with psychologists we can empirically investigate the impact of different social changes, and evaluate different interventions. This evidence also has a role in confirming (or not!) the predictions generated by different models of the mind concerning how it interacts with the social world. So we have some exciting collaborations coming up… but this interdisciplinary work is all funding dependent (it requires funds to run experiments, have a post-doc running them, etc). So watch this space for news of future work…
