
Bias, Structure and Injustice


Today's post is provided by Robin Zheng. In this post she introduces her paper "Bias, Structure, and Injustice: A Reply to Haslanger", published in Feminist Philosophy Quarterly. Robin Zheng is an Assistant Professor of Philosophy at Yale-NUS College. Her research focuses on issues of moral responsibility and structural injustice, along with other topics in ethics, moral psychology, feminist and social philosophy, and philosophy of race.

Some of her other works on topics related to this post include “Attributability, Accountability, and Implicit Bias” in Implicit Bias and Philosophy: Volume 2 (eds. Michael Brownstein and Jennifer Saul), “A Job for Philosophers: Causality, Responsibility, and Explaining Social Inequality” in Dialogue: Canadian Philosophical Review, and “What is My Role in Changing the System? A New Model of Responsibility for Structural Injustice” (forthcoming in Ethical Theory and Moral Practice). For more information, visit her website here.


In her 2015 “Social Structure, Narrative, and Explanation,” Sally Haslanger raises a highly influential critique of the philosophical literature on implicit bias. Her target is a kind of explanatory and normative individualism: she argues that implicit bias is neither necessary nor sufficient for explaining ongoing injustice, and that focusing on it wrongly locates the badness of injustice in individuals rather than in social structures.

She demonstrates this with several examples in which inequalities arise even when it is stipulated that no one has racist, sexist, or inegalitarian attitudes of any kind: a husband and wife whose decision to make her the primary caregiver (because only she receives parental leave) results in unequal incomes; a teacher whose fair disciplining of a Black student leads him and his non-White friends (because they have experienced long patterns of racism) to disengage and perform badly in her classroom; and a worker who loses his job when the city, strapped for cash, cancels his bus route to work.


One of the things I do in my paper is defend a certain kind of normative individualism by arguing that we need a theory of individual responsibility in order to hold particular persons accountable for the day-to-day work of collectively organizing to transform social structures. For example, if a political ally of mine makes an implicitly biased remark, I face the immediate problem of how to feel about and respond to that particular person – should I call them out? Should I modify my beliefs or attitudes toward them? Should I continue working with them? A theory of individual responsibility can offer me some guidance in answering these live, practical questions in ways that a structural theory might not.

In the paper I also propose a theory of implicit bias that draws on Pierre Bourdieu’s work on social structures. My suggestion is that we can understand implicit biases to be themselves a type of social structure. The two key Bourdieusian concepts here are field and habitus. Bourdieu conceives of social structures as “fields,” that is, as configurations of relationships between social positions.

Agents occupy different positions in the field according to how much and what sorts of capital (i.e., social, material, and cultural resources) they possess. Over time, an agent will acquire a “habitus” from the field, which Bourdieu describes as “schemes of perception, thought and action [that] tend to guarantee the ‘correctness’ of practices and their constancy over time.” There is a kind of mutually reinforcing fit between field and habitus; and while habitus is “deposited” in agents by a field, that field persists only insofar as people remain invested in acting in accordance with its rules. According to Bourdieu:

Social reality exists, so to speak, twice, in things and in minds, in fields and in habitus, outside and inside of agents. And when habitus encounters a social world of which it is the product, it is like a “fish in water”: it does not feel the weight of the water, and it takes the world about itself for granted. . . . It is because this world has produced me, because it has produced the categories of thought that I apply to it, that it appears to me as self-evident.

This seems to me like a wonderfully apt description of implicit social cognition. In other words, implicit biases are not ordinary “attitudes.” They are the thing in our heads that “fits” us into social structures – that locks us into forms of behavior which sustain those structures over time – because they are themselves a species of micro-level social structure that interlocks with the macro-level field.

As Omar Lizardo puts it:

The habitus is itself an objective structure albeit one located at a different ontological level and subject to different laws of functioning than the more traditional ‘structure’ represented by the field.

This is what it means for social reality to exist “twice.”

The upshot, I argue, is that part of the work of structural transformation begins from the inside out, with the construction of new habituses (i.e. new social structures) that serve to challenge existing injustice. Those of us committed to working together in the long run for a radical transformation will need practices of self-reflexive criticism and constant inspection – our “habit-busting habits” – to become second nature.

That is, we will need to develop “radical habitus” or “habitus of resistance” (Clarke 2000) alongside anti-oppressive fields that cultivate them. Becoming aware of our own implicit biases and putting measures in place to block them, I believe, is an important part of this process.
