
How Stable are Moral Judgments?

Today's post is by Paul Rehren at Utrecht University on his recent paper (co-authored with Walter Sinnott-Armstrong at Duke University) "How Stable are Moral Judgments?" (Review of Philosophy and Psychology 2022).


Paul Rehren

Psychologists and philosophers often work hand in hand to investigate many aspects of moral cognition. One issue, however, has been relatively neglected: the stability of moral judgments over time [but see Helzer et al. 2017].

In our paper, Walter Sinnott-Armstrong and I argue that there are four main reasons for philosophers and psychologists to consider the stability of moral judgments. First, stability can shed light on the role of moral values in moral judgments. Second, laypeople seem to expect moral judgments to be stable in a way that tastes are not. Third, philosophers assume that their moral judgments do not and should not change without reason. Finally, stability may have methodological implications for moral psychology.

Next, we report the results of a three-wave longitudinal study that probes the stability of one type of moral judgment: moral judgments about sacrificial dilemmas [see Christensen and Gomila 2012]. In each wave (6-8 days apart), participants rated the extent to which they thought that individuals in a series of sacrificial dilemmas should or should not act. We then investigated how stable these ratings remained between the first and second waves using two different approaches.

First, we found an overall test-retest correlation of .66. Second, we observed moderate to large proportions (M = 49%) of rating shifts (any change in rating between waves), and small to moderate proportions (M = 14%) of rating revisions, that is, cases in which a participant judged that p in one wave but did not judge that p in the other.
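To make these two measures concrete, here is a minimal Python sketch of how a test-retest correlation, the proportion of rating shifts, and the proportion of rating revisions could be computed between two waves. The ratings, the 7-point scale, and the midpoint rule for counting revisions are illustrative assumptions, not the study's actual materials or analysis code.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ratings by the same participants on one dilemma across
# two waves, on an assumed 1-7 scale (4 = neither should nor should not).
wave1 = np.array([6, 2, 7, 4, 5, 1, 6, 3])
wave2 = np.array([6, 3, 7, 2, 5, 2, 4, 3])

# Test-retest correlation: Pearson r between wave-1 and wave-2 ratings.
r, _ = pearsonr(wave1, wave2)

# Rating shift: any change in rating between waves.
shift = wave1 != wave2

# Rating revision (one simple operationalization): the rating moves
# across or onto the other side of the scale midpoint, so the
# participant no longer judges that p.
MIDPOINT = 4
revision = np.sign(wave1 - MIDPOINT) != np.sign(wave2 - MIDPOINT)

print(f"test-retest r = {r:.2f}")
print(f"rating shifts:    {shift.mean():.0%}")
print(f"rating revisions: {revision.mean():.0%}")
```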


Walter Sinnott-Armstrong

If our findings are not due to measurement error, and so do shed light on a genuine feature of real-life moral judgment, then what explains the unstable moral judgments we observed? In our paper, we investigate three possible explanations, but do not find evidence for any of them. First, because sacrificial dilemmas are in a certain sense designed to be difficult, moral judgments about acts in these scenarios may be much more unstable than moral judgments about other scenarios or statements. However, when we compared our test-retest correlations with a sampling of test-retest correlations from other instruments involving moral judgments, sacrificial dilemmas did not stand out. Second, we did not find evidence that moral judgment changes occur because people are more confident in their moral judgments the second time around.

Third, we did not find evidence that rating changes were often due to participants changing their minds in light of reasons and reflection. We tested for this in several ways. Moral judgment changes between the first two waves did not tend to persist when participants judged the same dilemmas a third time. Also, Actively Open-Minded Thinking [Baron 1993], Need for Cognition [Cacioppo and Petty 1982] and scores on the Cognitive Reflection Test [Frederick 2005] all failed to predict whether participants changed their ratings. Finally, participants who self-reported having changed their mind about at least one scenario, either because they thought more about it or because they discussed it with others, accounted for only a small proportion of moral judgment changes.
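As an illustration of the kind of individual-difference test just described, the sketch below regresses a binary changed/did-not-change outcome on CRT scores using a logistic regression. The data are simulated and the model is only a stand-in for how such a null result could be checked; it is not the paper's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data (illustrative only): CRT scores (0-3 items correct)
# and whether each participant changed at least one rating between waves.
rng = np.random.default_rng(0)
crt = rng.integers(0, 4, size=100).astype(float)
changed = rng.integers(0, 2, size=100)

# Logistic regression of rating change on CRT score. A slope on `crt`
# near zero (and non-significant) would mirror the null result above.
X = sm.add_constant(crt)
model = sm.Logit(changed, X).fit(disp=0)
print(model.params)   # intercept and CRT slope
print(model.pvalues)  # p-values for each coefficient
```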

We think that our findings of instability without reason may raise serious questions for moral philosophy (though of course, they do not finally settle any of these controversial issues). For example, many moral philosophers treat moral judgments about specific cases like sacrificial dilemmas as part of the “data of ethics” [Ross 2002, p. 41] when they use these judgments to choose among competing normative moral theories [e.g., Kamm 1993, Rawls 1971]. 

However, such data are unreliable when moral judgments change in the ways we observed, because incompatible moral judgments about the same act in the same circumstances cannot both be correct. A shifting foundation does not seem a good place to build a moral theory if you want it to last. We also suggest that instability can create trouble for the meta-ethical theory known as intuitionism [e.g., Audi 2007, Huemer 2005], as well as for virtue theories of ethics [e.g., Aristotle, Nicomachean Ethics].
