
Defining Agency after Implicit Bias


My name is Naomi Alderson. In 2014, I graduated from Cardiff University and was accepted onto a programme, led by Dr Jonathan Webber, that helped graduates to turn an undergraduate essay on the topic of implicit bias into a publishable paper. My paper, ‘Defining agency after implicit bias’, was published in March 2017 in Philosophical Psychology, and will be summarised here. In writing it, I found more questions concerning agency, cognition and behaviour than I was able to answer; I am going to continue my studies at UCL this September in the hope of getting closer to the truth.

Implicit biases are associations that affect the way we behave in ways that can be difficult to perceive or control. One example is the so-called ‘weapon bias’ studied by Keith Payne (2006), among others. Payne showed participants images of gun-shaped objects and asked them to make split-second decisions about whether they were guns or not.

He found that many participants were more likely to misidentify harmless objects as guns and to correctly identify guns more quickly if they were shown a picture of a black man’s face than if they were shown a white man’s face, due to an implicit association between black men and guns.

This bias was found even in people with no explicit racial bias and, moreover, was not directly controllable by reflective, deliberative effort: simply concentrating on not being biased was not enough to eliminate its effect.

The resistance of some implicit biases to deliberative control poses a threat to traditional reflectivist accounts of agency, whereby being an agent means being able to deliberatively choose an action and then act it out. If we cannot simply choose to be unbiased, then our implicit biases limit our ability to be agents on this account.

My paper aims to defend and update the reflectivist account of agency by outlining what kinds of control we do have over implicit biases and commenting upon what these forms of control suggest about the nature of agency itself.

One form of control we have over implicit biases is ecological control, a term introduced by Jules Holroyd and Daniel Kelly (2016). Ecological control is the process whereby agents employ external or internal ‘props’ in order to inhibit the effects of unwanted biases.

By changing our external environment by, for example, displaying counter-stereotypical images, or by repeating implementation intentions (internal mantras to behave in counter-stereotypical ways), we are able to develop automatic cognitions that inhibit the effect of unwanted biases.

Holroyd and Kelly also describe a different kind of control: automatic processes they call ‘props unconsciously employed’ to inhibit bias. They cite studies by Moskowitz & Li (2011) that suggest that striving to achieve egalitarian goals over time is one way to develop automatic processes that inhibit the effect of inegalitarian biases without consciously choosing to inhibit them.

I call this ‘goal control’ and consider it to be part of our reflective agency, making the description ‘unconsciously employed’ a little misleading; while the intention to control a bias may never be a conscious one, the goal that gives rise to the inhibiting automatic processes is.

I argue that this constitutes a kind of long-range reflective control – perhaps an especially useful one, because it does not require the agent to learn about a bias and form the specific intention to control it.

What emerges when these forms of control are considered is that each of them relies on automatic cognitions to achieve reflectively endorsed goals. This suggests a way to redefine agency while maintaining the reflectivist approach.

Instead of defining agency as the process of deliberatively governing actions, agency should be understood as acting in a way that serves reflectively endorsed goals, whether those actions are deliberatively governed or governed by automatic cognition.

If automatic cognition can sometimes be more useful for achieving the goals that we reflectively endorse, as shown by the examples of ecological and goal control, then we should not exclude automaticity from agency simply because it is automatic.

By the same token, we should not include deliberatively guided actions within agency if they serve goals that we have never reflectively endorsed, simply because they are immediately controllable and perceptible.

To be an agent is, on my account, to act upon a goal that you have made your own through reflective endorsement, whether you achieve it by deliberative or automatic cognition.

Happily, the process of education and interaction with others arguably provides most people with sufficient opportunity to reflectively choose and endorse their goals, enabling most of us to be considered agents most of the time.

Only those growing up in extremely isolated and unusual circumstances might be considered to have been denied the resources needed to reflectively choose and endorse their goals. This perhaps explains our sense that such people may, in some cases, be less responsible for immoral behaviour as a result.
