Today we continue our mini-series exploring issues around technological enhancement in learning and education, featuring papers from the “Cheating Education” special issue of Educational Theory. This week, Sophie Stammers discusses her paper “Improving knowledge acquisition and dissemination through technological interventions on cognitive biases”.
When we think about the role that technology could play in enhancing cognition, much of the literature focuses on extending faculties that are already performing well, so that they perform even better. We also know that humans possess a range of cognitive biases which produce systematically distorted cognitions. Could we use technology to erase our cognitive biases? Should we?
In this paper I wanted to think about the specific threats that cognitive biases pose to learning and education, and focused on two commonly recognised types of cognitive bias in particular:
1. Confirmation bias, where people are more likely to accept information that conforms with their existing beliefs than information that contradicts them.
For example, political beliefs shape the interpretation of scientific research: people who are politically conservative are significantly more likely to believe that anthropogenic climate change is not happening (McCright and Dunlap 2011), an effect that likely extends to educators and researchers (Carlton et al. 2015).
That our beliefs and values mediate the information we find persuasive constitutes a considerable risk to education. Consider, for example, omitting meaningful study of colonialism from UK history curricula because it does not cohere with one’s notion of British values.
2. Social bias, where people make unfavourable judgments about someone and/or their accomplishments on the basis of their social identity.
This can cause us to discount the scholarly contributions of people who do not fit the social stereotype associated with their field of knowledge, which could lead to distorted representations of academic progress in that field (e.g. in the evaluation of writing (Reeves 2014); teachers’ expectations of pupils (van den Bergh et al. 2010); and access to higher education (Milkman et al. 2015)).
One might think that these sorts of biases are good candidates for erasure through technological interventions because they can be difficult to recognise (although they are not necessarily thereby unconscious, see Hahn et al. 2014) and controlling them requires continued effort and commitment (e.g. Holroyd and Kelly 2016).
I took the opportunity to imagine a future neuroscience – I follow others in the enhancement literature in thinking it's not too soon to consider the possibility and permissibility of such technology in advance (but see section 1 for an overview of developments that might lead to the kinds of capabilities I'm assuming here). Let's distinguish two ways in which we might use technology to intervene on cognitive biases.
In the first, we halt the processes that produce distorted representations. In the paper, I suggest that this method is unhelpful, as it would immobilise a significant part of what the system does well – these processes have epistemic benefits and help us navigate large amounts of information.
The second kind of intervention targets particular associations in memory (if you’re familiar with it, imagine the machine in Eternal Sunshine of the Spotless Mind that deletes specific memories). Kathy Puddifoot (2017) argues for the epistemic benefits of (effectively) immobilising associations between, for example, “men” and “science”, because they feature in so much processing that results in further distorted cognition down the line. Could technology facilitate this process?
One might think that a futuristic device like the one in Eternal Sunshine, used to selectively delete biases, would solve the problem that controlling them otherwise requires continued effort and commitment.
But, as I discuss in more depth in the paper, it also risks missing an important educational opportunity that comes for free with more mundane, effortful methods. Particularly in the case of social bias, the relevant social stereotypes are connected to deep and pervasive structural inequalities (Haslanger 2015).
If our technological intervention allows users to delete their biases without acknowledging their content, their source, or their part in perpetuating structural injustices, then it forfeits an important opportunity: engaging learners and educators so that they come to recognise the structures that constrain the trajectories of knowledge acquisition. Perhaps, then, we should only use such technology alongside opportunities to engage with the wider social and historical context of bias. Doing so might even increase motivation and lead to more stable outcomes over time.
Drawing on Srinivasan (2015), I say a bit more in the paper about why this isn’t just a moral aim concerning who gets to be heard and to participate in education and learning, but one that will ultimately improve knowledge acquisition: scholars with homogeneous social and cultural experiences may advance their discipline in fewer directions than a more heterogeneous workforce would.