Tuesday 30 May 2023

Leaving the black box treatment of ignorance behind

Today's post is by Rik Peels. Rik is an Associate Professor in Philosophy and Religion & Theology at the Vrije Universiteit Amsterdam. He is currently leading a large research project funded by the European Research Council, on the epistemology and ethics of extreme beliefs.

He aims to synthesize empirical work with conceptual and normative approaches to fundamentalism, extremism, and conspiracy thinking.


For almost its entire history, philosophy has studied knowledge and understanding rather than ignorance. I see why: we seek to know and understand reality rather than be ignorant of it, at least for most things (privacy issues and the like may be an exception). And perhaps the tacit idea was that if we get a grip on knowledge and understanding, we thereby also have insight into the nature of ignorance, as ignorance is simply the lack of knowledge, or at least so it was thought. 

Even philosophical debates that appealed to ignorance, such as those about Socratic ignorance, negative theology, ignorance as an excuse, and white ignorance, were about the objects of ignorance (what we are ignorant of), not about ignorance itself. As a result, ignorance has remained a black box.

Rik Peels

In this book, I open that box and argue that all these tacit assumptions about ignorance are mistaken. Ignorance is not just the lack of knowledge: it is a highly complex, multi-layered notion that comes in numerous shapes and sizes. In a way, ignorance is a far more complex notion than knowledge. To give an example: when one knows something, one has a justified true belief that a proposition is true (and some anti-luck condition is met), or perhaps something like knowledge-first epistemology is correct. 

But ignorance is much more varied: when one is ignorant, one can disbelieve a true proposition, one can suspend judgment on it, one can waver and not yet have adopted an attitude towards it, one can never have thought about it, or one may even lack the conceptual resources to consider it. 

Things get even more intriguing when we move to the realm of social epistemology: a group can know as a group when at least some of its members, particularly the operative members, have knowledge; but, remarkably, a group can be ignorant even when most or all of its members have knowledge. 

Imagine, for instance, that all twenty soldiers in an army unit witness an instance of sexual harassment. They are all individually convinced that what they see is morally wrong, and in fact they know it. However, they do not dare to speak out, and since nobody does, each of them thinks they are the only one in the group who knows that the misbehavior is morally wrong. They decide to keep it to themselves. As a consequence, the group carries on as it did before. It is not implausible to think that this is a case in which they all individually know that the act is morally wrong, yet as a group they are ignorant of it.

To better understand such cases, this book first develops an epistemology of ignorance and then applies it. By an ‘epistemology of ignorance’ I mean a theory that states what the nature of ignorance is (is it the lack of knowledge, the lack of understanding, or yet something else?), what kinds and varieties there are, what group ignorance is, and what it is for ignorance to come in degrees. I then show how this epistemology of ignorance provides crucial building blocks for solving various problems in philosophy and beyond. 

I address challenging questions regarding white ignorance, structural ignorance (intentionally keeping others ignorant), responsibility for ignorance, ignorance as an excuse, ignorance in education, and expressing one’s ignorance. 

In each case, we see that opening the black box of ignorance is fruitful. In fact, paradoxically, a full-blown epistemology of ignorance provides something that we have wanted all along: knowledge and understanding—but this time about ignorance itself.

Tuesday 23 May 2023

Explanation and Values

This post is by Matteo Colombo. When we asked our readers to vote for their favourite post among the five most popular posts we ever published, Matteo's "Explanatory Judgment, Moral Offense and Value-Free Science" (27 September 2016) won by a large margin. So, on the occasion of our 10th birthday, we invited him to write for us again and update us on his research.

Matteo Colombo

Seven years ago I wrote a piece for Imperfect Cognitions describing a study that investigated the relationship between explanatory judgement, moral offense, and the value-free ideal of science. Conducted in collaboration with psychologists Leandra Bucher and Yoel Inbar, our study showed that the more you perceive the conclusion of a scientific study as morally offensive, the more likely you are to reject it as bad science. For instance, to the extent that you find the conclusion that males are naturally promiscuous while females are coy and choosy morally offensive, you’ll dismiss scientific reports supporting it as untrustworthy, regardless of the prior credibility of the hypothesis and the relevant evidence.

In the intervening years, I had many occasions to chat with friends, students, and acquaintances about conspicuous scientific endeavours, including advances in our understanding of anthropogenic climate change, the development of sophisticated techniques for gene editing and cultured meat, the expansive influence of AI in our lives, the robustness of psychological research on implicit bias, the causes of police brutality, and the rapid design of effective vaccines against COVID‑19. 

Often, I was confidently told things like “the climate has always changed”, “gene editing is immoral”, “AI is stealing our jobs and makes us dumber”, “it is morally problematic to claim that implicit bias is not a thing”, “vaccines against COVID‑19 are good just for big pharma.” While these judgements are imbued with value, and seemingly neglect or distort actual evidence, are they symptomatic of imperfect cognitions? In what ways? Do the people making them understand key concepts involved in value-laden science? Could their judgements about “offensive science” be ameliorated? How?

With developmental economist and philosopher Alexander Krauss, I explored some of these questions through a large experimental survey with about one thousand participants across different continents. Focusing on the concepts of climate change, healthy nutrition, poverty, and effective medical drugs, we found that public understanding of these notions is limited, with older age and liberal political values being the strongest predictors of understanding them correctly. 

In particular, thick concepts like poverty and health are more accurately understood than descriptive concepts like anthropogenic climate change. Thus, the fact that many scientific concepts are evaluatively loaded doesn’t fully explain how explanatory judgements about “offensive science” might exhibit imperfect cognitions. Although different people in different contexts might use different concepts of explanation to make sense of scientific findings and their bearing on natural phenomena, our results also indicated an illusion of explanatory depth and a better-than-average effect in public understanding of value-laden science. Would puncturing the illusion of explanatory depth then ameliorate people’s imperfect cognitions?

I explored this question with psychologists Jan Voelkel and Mark Brandt in a study specifically aimed at testing whether reducing people’s (over-)confidence in their own understanding of social and economic policies, by puncturing their illusion of explanatory depth, reduces their prejudice toward groups they perceive as having a worldview dissimilar from their own. We did not find support for this hypothesis overall, but exploratory analyses indicated that the hypothesized effect occurred for political moderates, though not for people who identified as strong liberals or conservatives.

So, maybe, cultivating intellectual humility is key for overcoming one’s prejudice and ameliorating “imperfect” explanatory judgements. Zhasmina Kostadinova, Kevin Strangmann and Lieke Houkes collaborated with Mark Brandt and me to find out. Our study revealed that intellectually humble people exhibit lower levels of prejudice towards members of groups they perceive as dissimilar. Surprisingly, however, it also showed that more intellectual humility was associated with more prejudice overall. This need not be symptomatic of imperfect cognition, and it is consistent with the role of cultivating intellectual humility in promoting responsible inquiry in the face of diversity and morally offensive science.

To clarify, broaden, and probe these findings, I am now collaborating with linguist Giovanni Cassani and philosopher Silvia Ivani to investigate how explanatory judgements about offensive science relate to differences in the way people process thick concepts compared to purely descriptive concepts, and to differences in their sensitivity to the potential consequences of scientific error. Stay tuned…

Let me conclude by expressing my gratitude to the readers and editors of Imperfect Cognitions for allowing this generous and undeserved spotlight on my ongoing research on explanation and values, and my best wishes to Imperfect Cognitions for its 10th birthday. Ad maiora!

Tuesday 16 May 2023

The Resilient Beliefs Project

Today's post is an interview with Paolo Costa, who is a researcher at the Center for Religious Studies of the Bruno Kessler Foundation and leads the Resilient Beliefs project, and Eugenia Lancellotta, who is a postdoctoral researcher on the project. We talked about the Resilient Beliefs project. 

Paolo Costa

KMH: What is the 'Resilient Beliefs' project all about?

PC & EL: It is a collaborative program involving nine researchers in philosophy and theology across three institutions: the Fondazione Bruno Kessler in Trento, Italy, and the Universities of Innsbruck and Brixen in Austria. It is about hyper-robust beliefs, so to speak. By “hyper-robust beliefs” we mean beliefs that are especially resistant to criticism and to change induced by counterargument and counterevidence. Now, these beliefs are often seen as irrational, because we tend to link rationality with revisability, flexibility, adaptability, and so on. 

But, of course, people who change their minds too easily may also be regarded as feeble-minded, and we generally appreciate people who hold onto their epistemically and morally reasonable beliefs when faced with adverse or dreadful circumstances. Since at least Plato’s time, skepticism has been known to have both a constructive and a destructive side. So, the question arises as to what distinguishes a “good” resilient belief from a “bad” one. In order to answer this question satisfactorily, the first thing you need to investigate is of course the source of such robustness, whether it is psychological, epistemological, ethical, educational, or something else. We are especially interested in shedding light on these aspects of the overall issue. 

Eugenia Lancellotta

KMH: How did you become interested in this topic?

PC & EL: It all began with a concern about the seemingly distinctive nature of religious disagreement. Most of us tend to think that arguing about religion is an especially delicate matter. Religious beliefs seem to delimit an area where it is best to proceed with great caution so as not to stir up conflicts, which may occasionally become violent or socially disruptive. 

Now, if this is the case, what is it about religious beliefs that makes them so difficult to handle cognitively? Is it because they are basically delusional beliefs, as most non-believers think? Or is it because they belong to that deeper set of beliefs which shape people’s identity and frame their relationship to reality? Or is it just a matter of telling good beliefs apart from bad beliefs? 

From these questions, the more general issue of belief resilience, and of a possible theory thereof, took shape. Are we always dealing here with biased beliefs? And when we say “biased beliefs”, do we necessarily mean “bad beliefs”? 

KMH: What is important about the topic and what do you hope it will contribute to ongoing work/debates?

PC & EL: Let us give you an obvious example. If a democratic form of life is premised on the ability to strike the right balance between firm principles and sincere acceptance of the irreducible pluralism of beliefs and opinions in a modern society, then understanding more about the resilience of beliefs may indeed be crucial to our future. 

In our multidisciplinary project, we would like to take some steps toward this goal by bringing together epistemology, religious studies, and theology – in short, we want to move from religious belief toward understanding something more about strongly valued beliefs in general. 

The areas in which we hope to make some significant scientific contributions are the nature of conspiracy theories, the function of dogma in Christianity, and affinities and differences between peer disagreement and religious disagreement or between religious beliefs and pathological delusions.

KMH: The project seems quite interdisciplinary, particularly with theology and religion. What do you think that brings to the project?

PC & EL: Yes, it is a deeply interdisciplinary project: not only in the sense that it tries to bring different scientific disciplines into dialogue, but also in that it tries to exploit and enhance contrasting points of view on the same phenomenon. We might call these stances subjective and objective, personal and impersonal, or, more appropriately, emic and etic. 

This is the focus of the panel we have organized for the next edition of the EuARe annual conference, which will be devoted precisely to the dialectic between insider’s and outsider’s perspectives in the study of religion. You need this kind of bifocal gaze to understand what lies behind our most resilient beliefs.

KMH: What are your future plans for the project?

PC & EL: The project is taking off in these very months. The first articulated contributions are beginning to take shape and, in some cases, to see print. We are also already planning events, including the big final conference. To stay updated on our activities and all the results of our research, just visit our website.

Tuesday 9 May 2023

Agent-Regret, Accidents, and Respect

Today's post is by Jake Wojtowicz on his recent paper "Agent-Regret, Accidents, and Respect" (The Journal of Ethics, 2023). Jake Wojtowicz earned his PhD from King's College London in 2019. He lives in Rochester, NY, where he writes about ethics and the philosophy of sport.

Jake Wojtowicz

Writing in The New Yorker, Alice Gregory talks about accidental killers and introduces a motorist, Patricia, who - temporarily blinded by the sunlight in her eyes - hit and killed a cyclist. It wasn’t her fault, but she spent time in the suicide unit and this has ruined her life. She even wrote to the state attorney asking to be criminally punished. 

Bernard Williams suggested someone in Patricia’s situation should feel “agent-regret”. This isn’t the guilt of the intentional or reckless wrongdoer, and it isn’t the regret of the bystander. In “Agent-regret, accidents, and respect”, I reflect on Patricia’s case to shed light on how we should think about someone who accidentally harms others. 

One way of understanding agent-regret is that it’s the regret that attaches to accidents. But Patricia was angered by friends who described what happened as an accident: “Yes, it was an accident… but, at the end of the day, I hit him, I took his life…No matter how much you want to dismiss it as an accident, I still feel responsible for it, and I am… I hit him! Why does nobody understand this?”

Yet it certainly was an accident, and why should anyone go to prison for an accident? Well, I think there is something revealing in Patricia’s desire to be punished, and it links to her annoyance at what happened being described as an accident. Prison is for agents.

Being reminded that something was an accident can bring a great deal of comfort - and it is important to make sure people like Patricia don’t blame themselves in the way they would if they had done something intentionally evil. But I think that when we describe things as accidents, we can unintentionally diminish the fact that somebody has done something. 

Thomas Nagel - in his Moral Luck, written alongside Williams’s piece - argues that there are two ways we can see our role in the world: on one hand, as mere things in the world; on the other, as responsible agents. If we see what happened to Patricia as an accident, we run the risk of seeing her role as “swallowed up by the order of mere events”. We take her agency out of it. But this isn’t something that just happened to or through Patricia; it is something she did. Through no fault of her own, for sure, but it is nonetheless true that Patricia killed the cyclist. 

Paying heed to this does two things. Firstly, it lets us properly understand Patricia’s position and helps her move on - if she sees herself as an agent while we see only an accident, we can’t do that. Secondly, it properly respects Patricia as an agent. Being an agent is central to our self-respect. If we downplay that in Patricia’s case, we run the risk of downplaying a central part of being human. 

Paying heed to the agency in agent-regret should help us better understand Patricia. She wants to be punished because she needs to be recognized as an agent. And in working out how she should move forward, and how we - as bystanders, friends, even victims - should treat her, we need to find a way to respond to her that respects her as an agent without lumping her in with the evildoers in the world. 

Tuesday 2 May 2023

Epistemic Coverage and Fake News

Today's post is by Shane Ryan at Singapore Management University, on his recent paper “Fake News, Epistemic Coverage and Trust” (The Political Quarterly, 2021).

Shane Ryan

Is there any relationship between low levels of trust in mainstream media and belief in fake news? I argue that there is such a link. Before we get to why I think so, it’s important to clarify some of the important terms in the question.

What is fake news? There is a lot of disagreement about how to analyse the term, and some, such as Habgood-Coote, even suggest we shouldn’t try to analyse it in the first place. Mindful of this difficult discussion, I don’t propose a full analysis of fake news; instead, I propose that fake news requires that information is presented as news while falling short of the (procedural) standards for news.

What do I mean by trust? I argue that trust requires two things: the trusting agent believes that the trusted agent has the competence to do whatever they are trusted to do, and the trusting agent believes that the trusted agent has goodwill with regard to doing it.

I argue that a lack of trust in media sources to report on newsworthy items, whether because of a lack of belief in their competence or goodwill, facilitates acceptance of fake news. This is because of how something called epistemic coverage works.

You might believe that President Joe Biden didn’t die a week ago, because you believe that had he died a week ago, then you would have already heard about it. In other words, you believe that your epistemic environment is such that if certain things happen, then you will be exposed to the information that they’ve happened, say on the basis that you believe that a US president dying is the kind of information that would be reported on mainstream news sites, and you regularly access such news sites. As a result, if someone posts on social media, or links to an unfamiliar site presented as a news site, that Biden died last week, then you’ll have a reason to dismiss the claim.

On the other hand, however, you might believe that your epistemic coverage is such that if certain other things happened, perhaps even things you regard as very newsworthy, you wouldn’t hear about it from the mainstream media. This opens the way for fake news in a way that wouldn’t be open if you trusted the mainstream media – this is not to suggest that you should always trust the mainstream media. In such a case, you lack the reason, based on your perception of your epistemic coverage, to dismiss the post from someone on social media or story from an unfamiliar site presented as a news site. 

This of course doesn’t entail that one will believe the story. It means, rather, that one is more susceptible to believing it as a result of lacking trust in mainstream media. This consequence raises questions about how mainstream media might become more trustworthy and be perceived as such by diverse audiences.