
Should Technology Erase Biases?

Today we continue our mini-series exploring issues regarding technological enhancement in learning and education, featuring papers from the “Cheating Education” special issue of Educational Theory. This week, Sophie Stammers discusses her paper “Improving knowledge acquisition and dissemination through technological interventions on cognitive biases”.


When we think about the role that technology could play in enhancing cognition, much of the literature focuses on extending faculties that are already performing well, so that they perform even better. We also know that humans possess a range of cognitive biases which produce systematically distorted cognitions. Could we use technology to erase our cognitive biases? Should we?

In this paper I wanted to think about the specific threats that cognitive biases pose to learning and education, and I focused on two commonly recognised types of cognitive bias in particular:


1. Confirmation bias, where people are more likely to accept information that conforms with their existing beliefs than information that contradicts them (a toy sketch of this asymmetric weighting follows this list).

For example, political beliefs shape the interpretation of scientific research: people who are politically conservative are significantly more likely to believe that anthropogenic climate change is not happening (McCright and Dunlap 2011), an effect that likely extends to educators and researchers (Carlton et al. 2015).

That our beliefs and values mediate the information we find persuasive constitutes a considerable risk to education. Consider, for example, omitting meaningful study of colonialism from UK history curricula because it does not cohere with one’s notion of British values.

2. Social bias, where people make unfavourable judgments about someone and/or their accomplishments on the basis of their social identity.

This can cause us to discount the scholarly contributions of people who do not fit the social stereotype associated with their field of knowledge, which could lead to distorted representations of academic progress in that field (e.g. in the evaluation of writing (Reeves 2014); teachers’ expectations of pupils (van den Bergh et al. 2010); and access to higher education (Milkman et al. 2015)).
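To make the first mechanism concrete, here is a minimal toy simulation of confirmation bias as asymmetric belief updating. This is my own illustration, not anything from the paper: the prior, the evidence values, and the discount factor are all assumptions of the sketch.

```python
# Toy model: confirmation bias as asymmetric Bayesian updating.
# Everything here is illustrative; none of it comes from the paper.

def update(belief: float, likelihood_ratio: float, weight: float = 1.0) -> float:
    """Bayesian update on the odds scale; weight < 1 weakens the evidence."""
    odds = belief / (1.0 - belief)
    odds *= likelihood_ratio ** weight
    return odds / (1.0 + odds)

def final_belief(evidence: list[float], disconfirming_weight: float) -> float:
    """Process mixed evidence starting from a mildly favourable prior of 0.6.

    Evidence items are likelihood ratios: > 1 confirms the agent's favoured
    hypothesis, < 1 disconfirms it. A biased agent applies
    disconfirming_weight < 1 to the disconfirming items only.
    """
    belief = 0.6
    for lr in evidence:
        weight = disconfirming_weight if lr < 1.0 else 1.0
        belief = update(belief, lr, weight)
    return belief

# Perfectly balanced evidence: four confirming and four disconfirming items.
evidence = [2.0, 0.5] * 4

print(f"even-handed agent: {final_belief(evidence, 1.0):.3f}")  # 0.600, unchanged
print(f"biased agent:      {final_belief(evidence, 0.5):.3f}")  # ~0.857, more confident
```

Nothing in this toy depends on the agent being careless or unintelligent; the drift comes entirely from the asymmetric weighting, which is part of why such biases are hard to detect from the inside.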

One might think that these sorts of biases are good candidates for erasure through technological interventions because they can be difficult to recognise (although they are not necessarily thereby unconscious, see Hahn et al. 2014) and controlling them requires continued effort and commitment (e.g. Holroyd and Kelly 2016).

I took the opportunity to imagine a future neuroscience – I follow others in the enhancement literature in thinking that it is not too soon to consider the possibility and permissibility of such technology in advance (see section 1 of the paper for an overview of developments that might lead to the kinds of capabilities I'm assuming here). Let's distinguish two ways in which we might use technology to intervene on cognitive biases.

In the first, we halt the processes that produce distorted representations. In the paper, I suggest that this method is unhelpful, as it would immobilise a significant part of what the system does well – these processes have epistemic benefits and help us navigate large amounts of information.

The second kind of intervention targets particular associations in memory (if you’re familiar with it, imagine the machine used in Eternal Sunshine of the Spotless Mind, which deletes specific memories). Kathy Puddifoot (2017) argues for the epistemic benefits of (effectively) immobilising our associations between, for example, “men” and “science”, because of their tendency to feature in so much processing that results in further distorted cognition down the line. Could technology facilitate this process?
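To fix ideas, here is a deliberately crude sketch in code of the difference between the two interventions. It is my own illustration; neither the data structure nor the example associations come from the paper or from Puddifoot. Disabling associative processing wholesale loses useful associations along with biased ones, while excising a single association leaves the rest of the system working.

```python
# A toy associative memory, contrasting the two interventions discussed above.
# The data structure and the example associations are assumptions of this sketch.

from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        self.links = defaultdict(set)
        self.enabled = True

    def associate(self, a: str, b: str) -> None:
        self.links[a].add(b)
        self.links[b].add(a)

    def recall(self, cue: str) -> set[str]:
        # Intervention one: halting associative processing wholesale.
        if not self.enabled:
            return set()
        return self.links[cue]

    def delete_association(self, a: str, b: str) -> None:
        # Intervention two: excising a single association, Eternal
        # Sunshine-style, while the rest of the system keeps working.
        self.links[a].discard(b)
        self.links[b].discard(a)

memory = AssociativeMemory()
memory.associate("men", "science")   # a biased stereotype association
memory.associate("smoke", "fire")    # a useful, well-earned association

memory.delete_association("men", "science")
print(memory.recall("smoke"))        # {'fire'} -- useful links survive

memory.enabled = False               # intervention one, by contrast,
print(memory.recall("smoke"))        # set() -- loses the good with the bad
```

Even in this toy, the appeal of the second intervention is visible: it preserves the epistemically useful work the system does, which is exactly what the first intervention would immobilise.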

One might think that a futuristic device like the one in Eternal Sunshine, used to selectively delete biases, would solve the problem that more mundane debiasing methods require continued effort and commitment.

But, as I discuss in more depth in the paper, it also risks missing an important educational opportunity that comes for free with more mundane, effortful methods. Particularly in the case of social bias, the relevant social stereotypes are connected to deep and pervasive structural inequalities (Haslanger 2015).

If our technological intervention allows users to delete their biases without acknowledging their content, their source, or their part in perpetuating structural injustices, then it takes away an important opportunity to help learners and educators recognise the structures that constrain the trajectories of knowledge acquisition. Perhaps, then, we should only use such technology alongside opportunities to engage with the wider social and historical context of bias. That engagement might even increase motivation and lead to more stable outcomes over time.

Drawing on Srinivasan (2015), I say a bit more in the paper about why this isn’t just a moral aim in terms of who gets to be heard and to participate in education and learning, but one that will ultimately improve knowledge acquisition: scholars who have homogeneous social and cultural experiences may advance their discipline in fewer directions than a more heterogeneous workforce would.
