Cognitive Bias, Philosophy and Public Policy

This post is by Sophie Stammers, PhD student in Philosophy at King’s College London. Here she writes about two policy papers, Unintentional Bias in Court and Unintentional Bias in Forensic Investigation, written as part of a recent research fellowship at the Parliamentary Office of Science and Technology, supported by the Arts and Humanities Research Council.

The Parliamentary Office of Science and Technology (POST) provides accessible overviews of research from the sciences, prepared for general parliamentary use, many of which are also freely available. My papers are part of a recent research stream exploring how advances in science and technology interact with issues in crime and justice: it would seem that if there is one place where unbiased reasoning and fair judgement really matter, then it is in the justice system.

My research focuses on implicit cognition, and in particular, implicit social bias. I am interested in the extent to which implicit biases differ from other cognitions—whether implicit biases are a unified, novel class, or whether they may be accounted for by the cognitive categories to which we are already committed. So I was keen to get a flavour of how the general topic might be applied in a public policy environment, working under the guidance of POST’s seasoned science advisors.

You might wonder what a philosopher, especially one who is not doing anything empirical, might be doing in a science policy research role. A serviceable answer to that question is that the briefing papers do not aim to contribute new results, but to summarise key findings, in order to communicate these to parliamentarians. And I certainly enjoyed being able to share a coffee and interview researchers whose work I have followed for a while, rather than puzzling out my questions on my own, as is often the experience of doctoral researchers in the humanities. But it is my opinion that whilst producing valid empirical results is clearly the preserve of empirical researchers, understanding the significance of these results is a cross-disciplinary endeavour.

For example, a number of theorists have characterised cognitive biases as ‘unconscious’, but sometimes it is not clear exactly what is unconscious, and whether it could become conscious under some circumstances. Similar considerations arise when talking about control. We decided not to talk about cognitive biases as ‘unconscious’, or fundamentally ‘uncontrollable’, preferring instead to highlight their ‘unintentional’ characteristics. These are all substantive philosophical notions, with implications for how we conceptualise cognitive biases and envisage mitigating their effects. It seems especially important to get it right for an audience likely to be new to the topic, and who regularly make consequential policy decisions, as well as to avert unsound inferences that might be encouraged by overstatement or inaccurate description of the evidence. So, I think that there is both a space, and a need, for philosophical voices in evidence-based policy.

Some of the most notable results we came across included evidence that forensic examiners’ judgements are subject to contextual biases. Experimenters took fingerprints from real cases—prints which the forensic experts in question had previously identified—and presented them again, this time in the context of information which ought to be irrelevant to the matching decision. It turns out that the presence of routine contextual information (such as whether the suspect has an alibi), as well as knowledge of other examiners' decisions, can alter the judgements that experts make about whether fingerprints match. Fortunately, there are some preventative measures that forensic teams may take against these biases, as outlined in the second paper.

Other court participants are also likely to be affected by cognitive biases: mock jurors are biased by trial publicity and presuppositions about the justice system, complex questions on cross-examination may lead people with no intention to deceive to report events inaccurately, whilst feedback which corroborates a mistaken claim raises eyewitnesses' certainty in that claim.

Whilst we did not lack results to include in the report, we did realise that there is relatively little UK-based research on implicit social biases in the UK justice system, compared to other jurisdictions such as the US. We came across stark statistics on differential treatment according to race and gender in the youth justice system, and whilst this in itself is not sufficient evidence that implicit bias plays any role, there is certainly room for further research on implicit bias in police work, the Crown Prosecution Service, and court proceedings.

During my time at POST, I was pleased to be able to meet with Lord Neuberger, the President of the Supreme Court, to discuss some of the results. He was supportive of affording researchers greater access to the justice system. Other stakeholders were less on board with substantive research or with changes to mitigate the effects of bias. So, there is still some work to do in communicating exactly how widespread cognitive biases may be, and what we need to do about them to ensure fair representation and unbiased judgement in the justice system. Hopefully these papers open the conversation.
