
Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization

Today's post is by Jonathan Ellis, Associate Professor of Philosophy and Director of the Center for Public Philosophy at the University of California, Santa Cruz, and Eric Schwitzgebel, Professor of Philosophy at the University of California, Riverside. This is the second in a two-part contribution on their paper "Rationalization in Moral and Philosophical Thought" in Moral Inferences, eds. J. F. Bonnefon and B. Trémolière (Psychology Press, 2017) (part one can be found here).




Last week we argued that your intelligence, vigilance, and academic expertise very likely don't do much to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. This week we'll argue that moral and philosophical topics are especially fertile grounds for rationalization.

Here’s one way of thinking about it: Rationalization, like crime, requires a motive and an opportunity. Ethics and philosophy provide plenty of both.

Regarding motive: Not everyone cares about every moral and philosophical issue, of course. But we all have some moral and philosophical issues that are near to our hearts – for reasons of cultural or religious identity, or personal self-conception, or for self-serving reasons, or because it's comfortable, exciting, or otherwise appealing to see the world in a certain way.

On day one of their philosophy classes, students are often already attracted to certain types of views and repulsed by others. They like the traditional and conservative, or they prefer the rebellious and exploratory; they like confirmations of certainty and order, or they prefer the chaotic and skeptical; they like moderation and common sense, or they prefer the excitement of the radical and unintuitive. Some positions fit with their pre-existing cultural and political identities better than others. Some positions are favored by their teachers and elders – and that’s attractive to some, and provokes rebellious contrarianism in others. Some moral conclusions may be attractively convenient, while others might require unpleasant contrition or behavior change.

The motive is there. So is the opportunity. Philosophical and moral questions rarely admit of straightforward proof or refutation, or a clear standard of correctness. Instead, they open into a complexity of considerations, which themselves do not admit of straightforward proof and which offer many loci for rationalization.

These loci are so plentiful and diverse! Moral and philosophical arguments, for instance, often turn crucially on a “sense of plausibility” (Kornblith, 1999); or on one’s judgment of the force of a particular reason, or the significance of a consideration. Methodological judgments are likewise fundamental in philosophical and moral thinking: What argumentative tacks should you first explore? How much critical attention should you pay to your pre-theoretic beliefs, and their sources, and which ones, in which respects? How much should you trust your intuitive judgments versus more explicitly reasoned responses? Which other philosophers, and which scientists (if any), should you regard as authorities whose judgments carry weight with you, and on which topics, and how much?

These questions are usually answered only implicitly, revealed in your choices about what to believe and what to doubt, what to read, what to take seriously and what to set aside. Even where they are answered explicitly, there is no clear set of criteria by which to answer them definitively. And so, if people's preferences can influence their perceptual judgments (possibly including judgments of size, color, and distance: Balcetis and Dunning 2006, 2007, 2010), what is remembered (Kunda 1990; Mele 2001), what hypotheses are envisioned (Trope and Liberman 1997), and what one attends to and for how long (Lord et al. 1979; Nickerson 1998), then it is no leap to assume that they can influence the myriad implicit judgments, intuitions, and choices involved in moral and philosophical reasoning.

Furthermore, patterns of bias can compound across several questions: with many loci for bias to enter, a person who is only slightly biased at each of a variety of junctures in a line of reasoning can ultimately come to a very different conclusion than someone who was not biased in the same way. Rationalization can operate by way of a series or network of "micro-instances" of motivated reasoning that together have a major amplificatory effect (synchronically, diachronically, or both), or by influencing you mightily at a crucial step (Ellis, manuscript).

We believe that these considerations, taken together with the considerations we advanced last week about the likely inability of intelligence, vigilance, and expertise to effectively protect us against rationalization, support the following conclusion: Few if any of us should confidently maintain that our moral and philosophical reasoning is not substantially tainted by significant, epistemically troubling degrees of rationalization. This is of course one possible explanation of the seeming intractability of philosophical disagreement.

Or perhaps we, the authors of this post, are the ones rationalizing; perhaps we are, for some reason, drawn toward a certain type of pessimism about the rationality of philosophers, and have sought and evaluated evidence and arguments for this conclusion in a badly biased manner? Um… no way. We have reviewed our reasoning and are sure that we were not affected by our preferences....
