Wednesday 27 March 2024

Addressing Epistemic Injustice: Perspectives from Health Law and Bioethics

This post is by Lisa Bortolotti, who reports on a symposium organised by Mark Flear to explore interdisciplinary perspectives (law, philosophy of psychiatry, bioethics, sociology, and more) on epistemic injustice, hosted by City University on 15th September 2023.

This is a report of some of the talks presented at the symposium. The other talks were given by Anna Drożdżowicz (on epistemic injustice and linguistic exclusion); Miranda Mourby (on reasonable expectations of privacy in healthcare); and Neil Maddox and Mark Flear (on epistemic injustice and separated human biomaterials). 

The City Law School, venue of the symposium

The first presentation was by David Archard (Queen’s University, Belfast) on lived experience and testimonial injustice. Lived experience is increasingly invoked in debates on a number of controversial topics as a source of special authority on a given subject. The appeal to lived experience often works to resist claims that contradict it. Is a refusal to listen to lived experience a form of testimonial injustice? For Fricker, testimonial injustice occurs when the speaker receives less credibility than they deserve, and the credibility deficit is due to an identity prejudice in the hearer. Testimonial injustice can manifest in different forms (disbelief, ignoring, rejecting).

Are statements of lived experience reliable? How do we establish that? What if the people with lived experience are deluded or mistaken about what they have experienced? Lived experience can be a source of advice (consultative) or of authority (authoritative). Reasons to consult are not necessarily reasons to consider lived experience authoritative. There is also an important difference between what lived experience is and what can be inferred from it. The injustice lies in not listening and not giving weight.

The second talk, by Lisa Bortolotti, presented research with Kathleen Murphy-Hollies (both at the University of Birmingham) on curiosity as an antidote to epistemic injustice. Lisa and Kathleen traced the complex history of curiosity in the philosophical literature, from a sin to a virtue, and argued that curiosity can be an epistemic virtue when people disposed to attain knowledge have some basic skills for pursuing curiosity, use their judgement, are well motivated, and find pleasure in the pursuit of curiosity.

Lisa and Kathleen also suggested that curiosity can be a moral virtue when directed at other people, as it can support enhanced mutual understanding. To argue their case, they discussed cases in which people’s experiences are contested and their views are marginalised and pathologised. In those cases, a curious interpreter is better able to understand the speaker’s perspective.

The third speaker, Jonathan Montgomery (University College London), discussed public reason and religious voices in judicial reasoning. Jonathan focused on cases where courts and parents disagree on whether life support should be stopped for children. Often parents are motivated by religious views in arguing that life support should continue. In other cases, a medical treatment or intervention is not wanted by the family due to religious convictions (e.g., refusing a blood transfusion that may be life saving).

Are the courts dismissive of parents’ perspectives? Is there a shared reality that is misunderstood by one party and not the other? How are credibility markers distributed? Jonathan reviewed a number of interesting and controversial cases in which several epistemic issues are at play, including risk assessment and disability discrimination. How can these problems be addressed?

One suggestion is to avoid the courts and try mediation first, on the assumption that less epistemic injustice occurs in mediation. Another suggestion is to think clearly about epistemic authority: does it rest on medical competence or on lived experience? Whose voice will be powerful in the given context? The presentation finished with a fascinating table detailing different ways of thinking about events as instances of epistemic injustice.

Next, Priscilla Alderson (University College London) focused on epistemic injustice in the context of children having major surgery. She reviewed how we have moved from children, and even parents, being excluded from care to important questions being raised about the role of parents and children in making healthcare decisions. Priscilla’s research with patients and surgeons suggests that it is key to obtain consent from children for surgery, even very young children. We can explain to them what is happening, informing and involving them in the procedures and the reasons for them.

A famous case of conjoined twins was examined in some detail: a Senegalese father was pressurised into agreeing to surgery to separate his daughters after being told that one of them would not survive due to her weaker heart. The BBC programme on this case placed a clear emphasis on medical expertise, undermined the parent’s view, and made no reference at all to what the twins thought or wanted. Even the ethics committee’s intervention was not helpful, as it did not address how the surviving child would feel after surgery, realising that she was alive because of her sister’s sacrifice. Priscilla argued for the need for a more engaged and embodied bioethics.

After lunch, Magda Furgalska (York Law School) contextualised epistemic injustice within mental health law research. In Magda’s research with people who experience credibility deficits in legal contexts, she found that many participants were surprised that she did not ask to see medical records or other evidence to corroborate what they were saying. Moreover, when she presented her work at conferences, audiences often questioned whether the research participants had told her the truth.

This emerges clearly in the context of insight. Mental health patients often face a catch-22. For patients, it is not just a question of recognising that they are ill but of complying with the clinicians’ view of their condition. So, if patients realise that they are ill and that they should go to hospital, then for the clinician they are not seriously ill and should not be hospitalised. If they do not realise that they are ill and do not think they should go to hospital, then for the clinician they are seriously ill and should be hospitalised; their report is not to be relied on anyway.

Insight and capacity are often used interchangeably, and compliance is used to determine both. Deciding whether someone has capacity on the basis of whether they have insight is a clear misapplication of the law, and also a case of silencing and testimonial harm, as capacity is denied pre-emptively without being tested.

Magdalena Eitenberger (University of Vienna) discussed epistemic injustice in the area of chronic illness. Magdalena introduced the concept of “patho-curative epistemic injustice”, applying it to diabetes and hepatitis C. The concept is drawn from the notion of patho-centric epistemic injustice developed by Havi Carel and Ian Kidd.

The idea is that some people experience a credibility deficit due to their illness, and hard facts are prioritised over lived experience reports. The new concept concentrates on “curedness” and on how, in some cases of chronic illness, an understanding in terms of being cured or fixed is not available. Biomedical models offer a reduced and simplistic conception of disease and health in which problem-fixing is central, while more holistic therapeutic approaches are ignored.

This also results in patients not being able to talk about their experiences over and above the idea that a person’s body can be either fixed or damaged. What “cured” means is not how the person feels (whether they feel healthy) but what their glucose levels are. Lived experience is not considered relevant, and this impacts healthcare policy and welfare too. The person’s role as someone who manages their own health trajectory is also undermined, even when they are given the (technological) resources to monitor their health.

Next, Swati Gola (University of Exeter) addressed epistemic injustice in India’s traditional healthcare systems. The Indian system of medicine is very heterogeneous, with some traditions indigenous and some introduced from abroad. There are many folk traditions at the margins (such as healers) which were sidelined as unscientific after the British occupation. How should we understand indigenous health traditions in the light of colonialism? Is there any epistemic injustice against those traditions?

Swati analysed the current situation in India, suggesting that knowledge colonialism is still a major problem, due to the dominance of biomedical models and the power of the medical professions as seen through the lens of Western medicine. A case was made that epistemic justice is essential to the decolonisation of knowledge, and of the self, via issues of hermeneutical injustice.

Wednesday 20 March 2024

Trust Responsibly

This post is by Jakob Ohlhorst, a postdoctoral fellow on the Extreme Beliefs project at Vrije Universiteit Amsterdam. It is about his recent book, Trust Responsibly (Routledge), which is available open access as an e-book.

Jakob Ohlhorst

"Strange coincidence, that every man whose skull has been opened had a brain!"

Trust Responsibly opens with this joke from Ludwig Wittgenstein. In On Certainty, Wittgenstein argued that there are some things we can only trust to be the case, because any evidence in favour of them must already presuppose them. That everyone has a brain was a better example in the 1950s than it is now. This goes beyond trust in people: it also involves trust that the world is older than 100 years, trust that you are not in a coma and dreaming, and so on. I argue in my book that, to trust responsibly, we need virtues.

The problem with trust is that, if you don’t need any evidence, you could trust just about anything to be the case. You might trust that astrology is a good way to learn about people or that aliens are causing catastrophes with lasers from Mars. How do we tell good cases of trust from bad ones? Giving up completely on trust is not an option; we would end up in total scepticism and cognitive paralysis. We could not do anything cognitive: not doubt, not believe, not investigate. So we must be at least somewhat warranted to trust our fundamental presuppositions.

I argue that we are warranted to trust in presuppositions that enable us to exercise our epistemic virtues. I explain my view of epistemic virtues in more detail here on Imperfect Cognitions, but essentially, they are the psychological resources that enable us to discover and gain knowledge, communicate it, and solve problems. Our virtues would not work if we did not trust them to work. We are therefore warranted to trust our virtues.

You might think: but wait, how can we know which of our psychological resources we can actually trust? How do we recognise virtues? If we possess reflective virtues like conscientiousness, which allow us to evaluate our own thinking, then we can recognise which virtues are trustworthy. I argue that we are warranted to trust a virtue on two conditions. First, we must be aware of the operation of the psychological processes that support the virtue, though we do not need to know that they are virtuous. Second, if we had the reflective virtues that allow us to evaluate our own thinking, then we would recognise them as virtues. When these two conditions are satisfied, our trust in a virtue is responsible and warranted.

To illustrate this, consider a rabbit’s flight response. It is hyper-sensitive and detects danger where there is none, so the flight response is not an epistemic virtue. If, through some miracle, the rabbit acquired reflective virtues and started thinking about the response, it would realise that it is unreliable and hence stop trusting it. Therefore, the rabbit is not warranted to trust the response. Still, the rabbit has other simple virtues that it is warranted to trust, say its ability to recognise food.

Friday 15 March 2024

Disentangling the relationship between conspiratorial beliefs and cognitive styles

This post is by Biljana Gjoneska, a national representative and research associate at the Macedonian Academy of Sciences and Arts. Here, she discusses her paper in the Psychology of Pseudoscience special issue introduced last week; this is the second post this week in the series on papers in the special issue.

Biljana investigates the behavioural aspects (conspiracy beliefs) and mental health aspects (internet addiction) of problematic internet use. She has served as a national representative for the EU COST Action on “Comparative Analysis of Conspiracy Theories” and has authored, reviewed and edited numerous scientific outputs on the topic. The most recent topical issue can be seen here.

Biljana Gjoneska

In my article for this special issue in Frontiers, I offer an integrated view on the relationship between conspiratorial beliefs (that secret and malevolent plots are forged by scheming groups or individuals) and three distinct cognitive styles (analytic thinking, critical thinking and scientific reasoning). To best illustrate my reasoning and the theoretical conceptualizations, I will draw from personal experience and contemplate one (seemingly) unrelated situation:

Prior to writing this post, I received another invitation to summarize my study for a popular outlet. The invitation was sent by email from an unknown address. The sender claimed to be a freelance journalist writing a piece for the New York Times Magazine, interested in learning more about why some people seem more prone to endorse conspiracy theories.

As scientists, we receive various sorts of daily invitations related to our work (to review articles, contribute to special issues, or join editorial boards, among others), many of which prove to be false, or even predatory. So, I first aimed to understand whether the person and the invitation were real, realistic and reliable. Hence, I employed my analytic thinking (which is slow, deliberate and effortful) to conduct a comprehensive search and gather information from verifiable sources. In essence, analytic thinking helped me to discern fact from fiction in my everyday processing of information.

Once I realized that the invitation seemed credible, I needed to make a decision about whether to accept it. For this, I had to remain open and willing to (re)consider, (re)appraise, review and interpret facts, as a way to update my prior beliefs associated with similar experiences (e.g., with seemingly exaggerated claims and invitations received by email). In short, I employed critical thinking as a way to decide whether or not to believe certain information. Critical thinking is essential when making judgments and daily decisions. Only then did I proceed to accept the invitation.

Once I had made the decision to accept the invitation, I started to anticipate the topics of discussion, as a way to improve the overall quality of the planned conversation. In doing so, I employed my scientific reasoning competencies (relying on induction, deduction, analogy, and causal reasoning) for the purposes of scientific inquiry (hypothesizing about the reason for the invitation and the possible outcomes of the conversation). In short, I relied on my scientific reasoning in an attempt to gain a fuller understanding of the subject matter by solving problems and finding solutions.

With this, I conclude my presentation of the three cognitive styles covered in my perspective article. Analytic thinking, critical thinking and scientific reasoning are all guided by rationality and by goals of reliable information processing, decision making, and problem solving. All three rely, to different extents, on our thinking dispositions, metacognitive strategies, and advanced cognitive skills. As such, they comprise a tripartite model of the reflective mind (building on the tripartite model of mind by Stanovich & Stanovich, 2010).

Importantly, a failure in any of these domains might be associated with an increased tendency to endorse conspiratorial beliefs or other pseudoscientific claims. This explains why, in certain instances, people with high cognitive abilities, or even advanced analytic thinking capacities, remain ‘susceptible’ to conspiratorial beliefs. At the moment, there is ample evidence to support the link between analytic thinking and (resistance to) conspiratorial beliefs, while the literature on the other two cognitive styles remains scarce.

In closing, I will refer back to the story that served to illustrate my key points. A poignant piece of writing, which stemmed from conversations with the scientists who contributed to this special issue, was published in the New York Times Magazine. It tells the story of verified scientists who became proponents of a disputed theory, using scientific means (arguments, but also publishing venues) to advance their claims. The piece contemplates the possibility of failed scientific reasoning and highlights the associated risks. Needless to say, these risks are serious, because they can heavily blur the lines between fact and fiction, leaving so many people with a sense of shattered reality.

Wednesday 13 March 2024

Stakes of knowing the truth: the case of a “miracle” treatment against Covid-19

Tiffany Morisseau is a researcher in Cognitive Psychology at the Laboratory of Applied Psychology and Ergonomics (LaPEA, University of Paris). Her current research projects mainly focus on the question of epistemic trust and vigilance, and the socio-cognitive mechanisms underlying how people come to process scientific information.

Tiffany is a member of the Horizon Europe KT4D consortium, on the risks and potential of knowledge technologies for democracy, and leads its Psychology part. Here, she talks about her paper in the Psychology of Pseudoscience special issue, introduced last week by editor Stefaan Blancke.

Tiffany Morisseau

Improving science education and media literacy is an important aspect of dealing with online misinformation. Doing so raises the level of accuracy at which information is considered false, ensuring that blatant errors, no longer perceived as plausible, are eliminated from the public sphere. But merely being plausible is not a sufficient condition for information to be valid! Information can be both plausible and false, and the likelihood of it being true must be critically assessed.

This requires some cognitive effort, especially when it comes to complex scientific information that is not easily accessible to the public at large. From an individual point of view, engaging in such an investigation is only worthwhile if the stakes of knowing the truth are high enough. Significant efforts in media and science education may therefore not be enough: one can consume and share false facts while being highly educated, for reasons other than the search for truth.

In our paper (Morisseau, Branch & Origgi, 2021) published in this special issue in Frontiers, we illustrated this with the example of hydroxychloroquine (HCQ), which was considered as a potential treatment for Covid-19 and was the focus of much media and popular interest, particularly in France.

Professor Didier Raoult and his team at the IHU Méditerranée Infection (Marseille) had reported positive results from a study of the effect of HCQ against Covid-19 in March 2020 (Gautret et al., 2020). Although relatively unknown to the general public a few months earlier, Raoult became increasingly popular. But in the weeks and months that followed, many questioned the assumption that HCQ was actually useful against Covid-19, with a scientific consensus soon emerging that it was not effective.

However, HCQ remained very popular with the public. What was the reason? Let us try to answer this question. To begin with, the hypothesis was certainly plausible, so it was cognitively and socially acceptable to hold it as true.

Secondly, holding the efficacy of HCQ to be true had many benefits, allowing for the satisfaction of a number of social and psychological motivations - from understanding the world (Lantian et al., 2021) to protecting one's identity (Nera et al., 2021; Nyhan and Reifler, 2019), as well as social integration and reputation management (Baumeister and Leary, 1995; Dunbar, 2012; Graeupner and Coman, 2017; Mercier, 2020).

In particular, the promotion of HCQ has been strongly associated with an attitude of distrust towards French elites, perceived as arrogant and disrespectful of popular practices and lifestyles (Sayare, 2020). The appeal to popular common sense and pragmatism, as opposed to experts suspected of being disconnected from the field with their complicated methodologies, has also been used by politicians to justify pro-HCQ positions (Risch, 2020). 

But when the objective of communication (in this case, the promotion of a political stance) moves away from the transmission of information per se, communication ceases to be associated with a strong presumption of truthfulness (Lynch, 2004; Cassam, 2018).

Of course, it is important to use accurate information when making decisions that rely on it. But in this particular case, neither the efficacy of the drug nor its actual adverse effects were paramount. First, the virus was initially perceived as posing little threat to healthy adults and children (Baud et al., 2020), and the question of whether HCQ was actually effective was ultimately of minor importance to most people. 

Secondly, the risks associated with taking HCQ were perceived as very low anyway. Many Covid-19 patients testified to the innocuous nature of the treatment, and the question of its dangerousness at the population level was not so relevant at the individual level.

More generally, we live with many false or approximate beliefs anyway (Boyer, 2018; Oliver and Wood, 2018). This is not necessarily a problem as such, if these beliefs do not lead individuals to make choices against their own interests, or against the interests of society at large. Yet the building of a science-based consensus shared by all members of a society is precisely what is needed to create the conditions for translating this knowledge into effective policies.

When “superficial” opinions – i.e., opinions that do not have a strong epistemic basis – enter the public sphere (in April 2020, a poll published in the newspaper Le Parisien claimed that “59% of the French population believed HCQ was effective against the new coronavirus”), they influence the way societal issues are conceived. 

This can negatively affect the quality of policy decisions that are made, with concrete consequences for people's well-being. Public opinions on scientific issues must therefore be interpreted at the right level, especially as they will determine major political and societal choices.

Wednesday 6 March 2024

The Psychology of Pseudoscience

Stefaan Blancke is a philosopher of science at the department of Philosophy at Tilburg University in the Netherlands and a member of the Tilburg Center for Moral Philosophy, Epistemology and Philosophy of Science (TiLPS). 

His current research mainly focuses on the role of cooperation and reputation in science, pseudoscience, and morality. You can find him on Twitter (@stblancke). This post is about a special issue on the Psychology of Pseudoscience, which Stefaan edited.

Stefaan Blancke

As a philosopher of science, I have long been interested in pseudoscience: not only because pseudoscience induces us to think about what science is, so that we can explain why pseudoscience is not science, but also because I want to understand what makes our minds vulnerable to beliefs that plainly contradict our best scientific theories. Examples of pseudoscience abound, from creationism and homeopathy to anti-vaccination and telepathy. Given that we should expect the mind to represent the world reliably, this is surprising. Why do so many people cherish weird beliefs?

To answer this question, we must first understand the human mind, which inevitably brings us to the domain of psychology. Building on research in evolutionary psychology, cognitive psychology, and anthropology, we can assume that pseudoscientific beliefs tend to become widespread because they tap into our evolved intuitive expectations about the world. These intuitions are in place because they allow us to navigate our surroundings effectively.

However, they also create biases that dispose us to adopt beliefs that conflict with a scientific understanding of the world. Creationism, for instance, taps into our psychological essentialism and teleological intuitions, whereas mechanisms for pathogen detection and aversion make us suspicious of, and even opposed to, modern technologies such as genetic modification. Their intuitive appeal makes these beliefs contagious. Furthermore, pseudoscientific beliefs adopt the trappings of science to piggyback on science’s epistemic and cultural authority. This study of the spread of pseudoscientific beliefs has resulted in an epidemiology of pseudoscience.

In line with this research on the frailness of the human mind, I, together with a team of fellow philosophers and psychologists, edited a special collection on the psychology of pseudoscience for Frontiers in Psychology. The collection consists of four contributions, each of which sheds new light on a different aspect of the central theme. As three of the four articles will be presented in more detail by their authors, I will just briefly introduce them here. Tiffany Morisseau, T.Y. Branch, and Gloria Origgi discuss how people often use scientific information for social purposes, which makes them less concerned about its accuracy than its plausibility.

This allows controversial scientific theories to spread. Joffrey Fuhrer, Florian Cova, Nicolas Gauvrit, and Sebastian Dieguez provide a conceptual analysis of pseudo-expertise, a phenomenon notoriously common in pseudoscience; the authors also develop a framework for further research. Biljana Gjoneska investigates how the cognitive styles of analytic thinking, critical thinking and scientific reasoning relate to (dis)trust in conspiratorial beliefs. Finally, in an article not presented here, Spencer Mermelstein and Tamsin C. German argue that counterintuitive pseudoscientific beliefs spread because they play into our evolved communication mechanisms.

I heartily recommend reading next week's post from Tiffany Morisseau on her paper in the issue, and consulting the articles of our collection. I hope you enjoy the read!