Tuesday 8 November 2022

How Stable are Moral Judgments?

Today's post is by Paul Rehren at Utrecht University on his recent paper (co-authored with Walter Sinnott-Armstrong at Duke University) "How Stable are Moral Judgments?" (Review of Philosophy and Psychology 2022).


Paul Rehren

Psychologists and philosophers often work hand in hand to investigate many aspects of moral cognition. One issue has, however, been relatively neglected: the stability of moral judgments over time [but see Helzer et al. 2017].

In our paper, Walter Sinnott-Armstrong and I argue that there are four main reasons for philosophers and psychologists to consider the stability of moral judgments. First, the stability of moral judgments can shed light on the role of moral values in moral judgments. Second, lay people seem to expect moral judgments to be stable in a way that contrasts with tastes. Third, philosophers also assume that their moral judgments do not and should not change without reason. Finally, stability may have methodological implications for moral psychology.

Next, we report the results of a three-wave longitudinal study that probes the stability of one type of moral judgment: moral judgments about sacrificial dilemmas [see Christensen and Gomila 2012]. In each wave (6-8 days apart), participants rated the extent to which they thought that individuals in a series of sacrificial dilemmas should or should not act. We then investigated how stable these ratings remained between the first and second wave using two different approaches.

First, we found an overall test-retest correlation of .66. Second, we observed moderate to large proportions of rating shifts (any change in rating between waves; M = 49%), and small to moderate proportions of rating revisions (M = 14%), that is, cases in which a participant judged p in one wave but did not judge p in the other wave.
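To make the difference between the two measures concrete, here is a minimal sketch in Python of how a test-retest correlation and the shift and revision proportions could be computed. The data are made up, the 1-7 should/should-not rating scale and the midpoint rule for flagging revisions are illustrative assumptions, and none of this reproduces the paper's exact coding.

```python
# Illustrative sketch only: hypothetical ratings on a 1-7 scale
# (1 = "definitely should not act", 7 = "definitely should act").
# The midpoint rule for flagging "revisions" is an assumption made
# for illustration, not necessarily the paper's coding scheme.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data: 50 participants x 10 dilemmas, rated in two waves.
wave1 = rng.integers(1, 8, size=(50, 10))
# Wave 2 = wave 1 plus some noise, clipped back onto the 1-7 scale.
wave2 = np.clip(wave1 + rng.integers(-2, 3, size=wave1.shape), 1, 7)

# Approach 1: test-retest correlation across all participant-dilemma ratings.
r, _ = pearsonr(wave1.ravel(), wave2.ravel())

# Approach 2a: rating shifts -- any change in rating between the two waves.
shift = wave1 != wave2

# Approach 2b: rating revisions -- the rating crosses the scale midpoint,
# i.e. the participant judged "should" in one wave but not in the other.
midpoint = 4
revision = (wave1 > midpoint) != (wave2 > midpoint)

print(f"test-retest r           = {r:.2f}")
print(f"proportion of shifts    = {shift.mean():.2%}")
print(f"proportion of revisions = {revision.mean():.2%}")
```

On this toy coding, every revision is also a shift, but most shifts (say, moving from a 6 to a 5) leave the overall should/should-not judgment intact, which is why the revision proportion is the smaller of the two.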


Walter Sinnott-Armstrong

If our findings are not due to measurement error and so do shed light on a genuine feature of real-life moral judgment, then what explains the unstable moral judgments we observed? In our paper, we investigate three possible explanations, but do not find evidence for any of them. First, because sacrificial dilemmas are in a certain sense designed to be difficult, moral judgments about acts in these scenarios may give rise to much more instability than moral judgments about other scenarios or statements. However, we compared our test-retest correlations with a sampling of test-retest correlations from other instruments involving moral judgments, and sacrificial dilemmas did not stand out.

Second, we did not find evidence that moral judgment changes occur because people are more confident in their moral judgments the second time around.

Third, we did not find evidence that rating changes were often due to participants changing their minds in light of reasons and reflection. We tested for this in a few different ways. Moral judgment changes between the first two waves did not tend to persist when participants judged the same dilemmas for a third time. 

Also, scores on Actively Open-minded Thinking [Baron 1993], Need for Cognition [Cacioppo and Petty 1982], and the Cognitive Reflection Test [Frederick 2005] all failed to predict whether participants changed their ratings. Finally, participants who self-reported having changed their mind about at least one scenario, either because they thought more about it or because they discussed it with others, accounted for only a small proportion of moral judgment changes.

We think that our findings of instability without reason may raise serious questions for moral philosophy (though of course, they do not finally settle any of these controversial issues). For example, many moral philosophers treat moral judgments about specific cases like sacrificial dilemmas as part of the “data of ethics” [Ross 2002, p. 41] when they use these judgments to choose among competing normative moral theories [e.g., Kamm 1993, Rawls 1971]. 

However, this data is unreliable when moral judgments change in the ways we observed, because incompatible moral judgments about the same act in the same circumstances cannot both be correct. Such a shifting foundation does not seem a good place to build a moral theory if you want it to last. In addition, we suggest that instability can create trouble for the meta-ethical theory known as intuitionism [e.g., Audi 2007, Huemer 2005], as well as for virtue theories of ethics [e.g., Aristotle, Nicomachean Ethics].
