
Explanation and Values

This post is by Matteo Colombo. When we asked our readers to vote for their favourite post among the five most popular posts we ever published, Matteo's "Explanatory Judgment, Moral Offense and Value-Free Science" (27 September 2016) won by a large margin. So, on the occasion of our 10th birthday, we invited him to write for us again and update us on his research.


Matteo Colombo

Seven years ago I wrote a piece for Imperfect Cognitions, where I described a study aimed at investigating the relationship between explanatory judgement, moral offense, and the value-free ideal of science. Conducted in collaboration with psychologists Leandra Bucher and Yoel Inbar, our study showed that the more you perceive the conclusion of a scientific study as morally offensive, the more likely you are to reject it as bad science. For instance, to the extent that you find the conclusion that males are naturally promiscuous while females are coy and choosy to be morally offensive, you’ll dismiss scientific reports supporting it as untrustworthy, regardless of the prior credibility of this hypothesis and the relevant evidence.

In the intervening years, I have had many occasions to chat with friends, students, and acquaintances about prominent scientific endeavours, including advances in our understanding of anthropogenic climate change, the development of sophisticated techniques for gene editing and cultured meat, the expanding influence of AI in our lives, the robustness of psychological research on implicit bias, the causes of police brutality, and the rapid design of effective vaccines against COVID‑19.

Often, I was confidently told things like “the climate has always changed”, “gene editing is immoral”, “AI is stealing our jobs and making us dumber”, “it is morally problematic to claim that implicit bias is not a thing”, and “vaccines against COVID‑19 are good just for big pharma.” These judgements are imbued with value, and they seemingly neglect or distort the actual evidence. But are they symptomatic of imperfect cognitions? In what ways? Do the people making them understand the key concepts involved in value-laden science? Could their judgements about “offensive science” be ameliorated? How?

With developmental economist and philosopher Alexander Krauss, I explored some of these questions in a large experimental survey with about one thousand participants across different continents. Focusing on the concepts of climate change, healthy nutrition, poverty, and effective medical drugs, we found that public understanding of these notions is limited, with older age and liberal political values being the strongest predictors of correctly understanding them.

In particular, thick concepts like poverty and health are more accurately understood than descriptive concepts like anthropogenic climate change. Thus, the fact that many scientific concepts are evaluatively loaded doesn’t fully explain how explanatory judgements about “offensive science” might exhibit imperfect cognitions. Although different people in different contexts might use different concepts of explanation to make sense of scientific findings and their bearing on natural phenomena, our results also indicated an illusion of explanatory depth and a better-than-average effect in public understanding of value-laden science. Would puncturing the illusion of explanatory depth then ameliorate people’s imperfect cognitions?

I explored this question with psychologists Jan Voelkel and Mark Brandt in a study specifically aimed at testing whether reducing people’s (over-)confidence in their own understanding of social and economic policies, by puncturing their illusion of explanatory depth, would reduce their prejudice toward groups they perceive as having a worldview dissimilar from their own. We did not find support for this hypothesis overall, but exploratory analyses indicated that the hypothesized effect did occur for political moderates, though not for people who identified as strong liberals or conservatives.

So, maybe, cultivating intellectual humility is key to overcoming one’s prejudice and ameliorating “imperfect” explanatory judgements. Zhasmina Kostadinova, Kevin Strangmann, and Lieke Houkes collaborated with Mark Brandt and me to find out. Our study revealed that intellectually humble people exhibit lower levels of prejudice towards members of groups they perceive as dissimilar. Surprisingly, however, it also showed that more intellectual humility was associated with more prejudice overall. This finding need not be symptomatic of imperfect cognition, and it is consistent with the role of cultivating intellectual humility in promoting responsible inquiry in the face of diversity and morally offensive science.

To clarify, broaden, and probe these findings, I am now collaborating with linguist Giovanni Cassani and philosopher Silvia Ivani to investigate how explanatory judgements about offensive science relate to differences in the way people process thick concepts compared to purely descriptive concepts, and to differences in their sensitivity to the potential consequences of scientific error. Stay tuned…

Let me conclude by expressing my gratitude to the readers and editors of Imperfect Cognitions for this generous and undeserved spotlight on my ongoing research on explanation and values, and my best wishes to Imperfect Cognitions for its 10th b-day. Ad maiora!
