
Implicit Cognitions and Responsibility

Jules Holroyd
I am a lecturer in philosophy at the University of Nottingham. I've recently been working on implicit social cognitions in responsible agency - in particular, implicit biases. Implicit biases are, roughly, stored associations in memory, which can operate without the conscious awareness of the agent, and influence judgements and behaviours. We have many implicit associations and some of these enable us to navigate the world effectively. But others - those falling under the rubric of 'implicit bias' - seem deeply problematic and have a role in perpetuating discrimination and disadvantage (for a great resource on implicit bias, see here).

Implicit associations are discerned in experimental settings, in which the differential speed with which participants pair items (such as black and white faces with positive vs negative terms) is taken to indicate the strength of the associations held. Experimental evidence indicates that these implicit associations manifest in other, more worrying contexts: the differential evaluation of CVs with the same (identical!) qualifications but attributed to candidates of different genders; the differential hiring recommendations for equally moderately qualified black and white job applicants; the greater readiness to identify an indeterminate object as a gun when it is in a black, rather than white, man's hand (for an overview, see Jost et al. 2009).
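The differential-speed measure described above can be illustrated with a small sketch. This is a hypothetical example, not the experimental software actually used in these studies: the reaction-time data are invented, and the scoring shown is a simplified version of the widely used D-score (latency difference in pooled-standard-deviation units), assuming only that faster responses in 'compatible' pairing blocks than in 'incompatible' ones indicate a stronger stored association.

```python
# Simplified IAT-style association score (illustrative; data invented).
# The score is the difference in mean response latencies between
# incompatible and compatible pairing blocks, divided by the pooled
# standard deviation of all latencies. Larger = stronger association.
from statistics import mean, stdev

def d_score(compatible_ms, incompatible_ms):
    """Latency difference in pooled-SD units."""
    pooled = stdev(compatible_ms + incompatible_ms)
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled

# Invented per-trial latencies in milliseconds:
compatible = [620, 580, 650, 600, 590]      # faster pairing block
incompatible = [780, 820, 760, 800, 790]    # slower pairing block
print(round(d_score(compatible, incompatible), 2))  # → 1.84
```

The division by the pooled standard deviation is what lets scores be compared across participants who respond at different overall speeds; the raw millisecond difference alone would conflate association strength with general slowness.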
One general question concerns the role of these implicit cognitions in individual agency. Not all implicit cognitions are prima facie problematic in the ways that those described above are. Can they be epistemically innocent, or even valuable? How should we model agency, given the role of implicit as well as explicit processes in the production of action?
A specific question I have looked at in some detail (Holroyd 2012) concerns whether individuals should be held responsible for being influenced by morally problematic implicit biases. I identify a range of considerations that have been appealed to in support of the idea that individuals are not morally responsible for being influenced by such implicit associations:
i) that the agent is not causally responsible for the presence of such associations;
ii) that individuals lack the relevant kind of control over their operation;
iii) that individuals are unaware of their operation;
iv) that implicit biases are not responsive to reasons.
I have argued that with respect to each of these claims, there is reason to hold either that the condition posited as necessary for moral responsibility should not be accepted (i and ii), or that empirical evidence indicates that the condition is at least sometimes met (iii and iv).
For example, an argument such as the following, for exempting individuals from responsibility for being influenced by bias, might be offered (for versions of this argument see Saul, Levy 2012):
(i) Individuals cannot be held responsible for cognitive processes, or influences on behaviour and judgement, over which they do not have control.
(ii) Manifesting - being influenced in behaviour and judgement by - implicit biases is not under an agent's control.
(iii) Therefore, individuals cannot be held responsible for the influence of implicit biases on behaviour and judgement.
One strategy for evaluating this argument is to consider exactly what sense of control is at issue (direct control? rational control? the ability to inhibit or intervene?). In work in progress, Dan Kelly and I evaluate different versions of the control argument, and argue that ecological control can be sufficient for responsibility for implicit biases.
In my 2012 paper, I argue that it is plausible to suppose that indirect control is, at least sometimes, the relevant sense of control for responsibility, and that on this interpretation premise (ii) is false. There is evidence suggesting that for some implicit associations (race and negative/positive word associations), the extent to which they influence action is correlated with the individual's explicit beliefs about the importance of non-prejudiced behaviour, or goals to treat people fairly (see Devine et al. 2002, Moskowitz & Li 2011). We might exert control over the manifestation of implicit biases indirectly, then, via changes in our beliefs and goals. Insofar as individuals may have indirect control in this way, it is not obvious that there are grounds, related to lack of control, for not holding individuals responsible for implicit biases.
In future work I aim to look in more detail at the kinds of control we might have over implicit cognitions, at whether implicit cognitions differ from each other in important ways (in progress, with Joseph Sweetman), and at the role of holding each other responsible in regulating the expression of implicit biases.
