This post is by Josh May, Associate Professor of Philosophy at the University of Alabama at Birmingham. He presents his book, Regard for Reason in the Moral Mind (OUP, 2018). May’s research lies primarily at the intersection of ethics and science. He received his PhD in philosophy from the University of California, Santa Barbara in 2011. Before taking a position at UAB, he spent 2 years teaching at Monash University in Melbourne, Australia.
My book is a scientifically informed examination of moral judgment and moral motivation that ultimately argues for what I call optimistic rationalism, which contains both empirical and normative theses. The empirical thesis is a form of (psychological) rationalism, which asserts that moral judgment and motivation are fundamentally driven by reasoning or inference. The normative thesis is cautiously optimistic, claiming that moral cognition and motivation are, in light of the science, in pretty good shape: at the very least, the empirical evidence doesn’t warrant sweeping debunking of either core aspect of the moral mind.
There are two key maneuvers I make to support these theses. First, we must recognize that reasoning/inference often occurs unconsciously. Many of our moral judgments are automatic and intuitive, but we shouldn’t conclude that they are driven merely by gut feelings just because conscious deliberation didn’t precede the judgment. Even with the replication crisis, the science clearly converges on the idea that most of our mental lives involve complex computation that isn’t always accessible to introspection and that heavily influences behavior. As it goes for judgments of geography, mathematics, and others’ mental states, so it goes for moral judgment. Indeed, the heart of the rationalist position is that moral cognition isn’t special in requiring emotion (conceived as distinct from reason), compared to beliefs about other topics. In the end, the reason/emotion dichotomy is dubious, but that supports the rationalist position, not sentimentalism.
Second, I argue that what influences our moral minds often looks irrelevant or extraneous at first glance but is less problematic upon further inspection. Sometimes the issue is that irrelevant factors hardly influence our moral thoughts or motivations once one digs into the details of the studies. For example, meta-analyses of framing effects and incidental feelings of disgust suggest that they at best exert a small influence on a minority of our moral choices. Of course, some factors do substantially influence us, but a proper understanding of them reveals that they’re morally relevant. For example, Greene distrusts our commonsense moral judgments that conflict with utilitarianism because they’re influenced by whether a harm is “prototypically violent.” But it turns out that this involves harming actively, through personal contact, and as a means to an end, which together form a morally relevant factor; it’s not merely an aversion to pushing. Similarly, the well-established bystander effect shows that helping behavior is sensitive to whether one perceives any help to be necessary, but that’s a morally relevant consideration (contra Doris). After examining many kinds of influences, I build on some other work with Victor Kumar to develop a kind of dilemma for those who seek to empirically debunk many of our moral thoughts or motivations: the purportedly problematic influences are often either substantial or morally irrelevant but rarely both.
Another key move in the book is particularly relevant for this blog, as it draws from the literature on rationalization and motivated reasoning. Consider, for instance, that participants in a study were more likely to cheat privately on a test only a little because that’s all they could justify to themselves. In another study, publicly supporting environmentally friendly products allowed participants to license cheating a bit afterwards. In various ways, we often rapidly and unconsciously rationalize bad behavior, and not merely because it brings personal gain but because we need a way to justify it to ourselves on moral grounds.
Others have made related points in different contexts about how rationalization can be motivated and even beneficial (see e.g. Summers; Bortolotti; Sie). But the focus is often on post hoc rationalization, whereas I emphasize what I call ante hoc rationalization, since it serves to justify the desired behavior before one engages in it. That’s important because it suggests that reasoning (even if poor reasoning) is a cause of the resulting judgment or motivation, not merely an effect.
In this way and others, I hope the book draws helpful connections among distinct literatures. Rather than emphasizing a handful of provocative studies, the book aims to synthesize a wide range of research in order to extract well-grounded lessons for moral psychology.