
Regard for Reason in the Moral Mind

This post is by Josh May, Associate Professor of Philosophy at the University of Alabama at Birmingham. He presents his book, Regard for Reason in the Moral Mind (OUP, 2018). May’s research lies primarily at the intersection of ethics and science. He received his PhD in philosophy from the University of California, Santa Barbara in 2011. Before taking a position at UAB, he spent 2 years teaching at Monash University in Melbourne, Australia.

My book is a scientifically informed examination of moral judgment and moral motivation that ultimately argues for what I call optimistic rationalism, which comprises an empirical thesis and a normative thesis. The empirical thesis is a form of (psychological) rationalism, which asserts that moral judgment and motivation are fundamentally driven by reasoning or inference. The normative thesis is cautiously optimistic, claiming that moral cognition and motivation are, in light of the science, in pretty good shape: at the very least, the empirical evidence doesn’t warrant sweeping debunking of either core aspect of the moral mind.

There are two key maneuvers I make to support these theses. First, we must recognize that reasoning/inference often occurs unconsciously. Many of our moral judgments are automatic and intuitive, but we shouldn’t conclude that they are driven merely by gut feelings just because conscious deliberation didn’t precede the judgment. Even with the replication crisis, the science clearly converges on the idea that most of our mental lives involve complex computation that isn’t always accessible to introspection and that heavily influences behavior. As it goes for judgments of geography, mathematics, and others’ mental states, so it goes for moral judgment. Indeed, the heart of the rationalist position is that moral cognition isn’t special in requiring emotion (conceived as distinct from reason), compared to beliefs about other topics. In the end, the reason/emotion dichotomy is dubious, but that supports the rationalist position, not sentimentalism.

Second, I argue that what influences our moral minds often looks irrelevant or extraneous at first glance but is less problematic upon further inspection. Sometimes the issue is that irrelevant factors hardly influence our moral thoughts or motivations once one digs into the details of the studies. For example, meta-analyses of framing effects and incidental feelings of disgust suggest that they at best exert a small influence on a minority of our moral choices. Of course, some factors do substantially influence us, but a proper understanding of them reveals that they’re morally relevant. For example, Greene distrusts our commonsense moral judgments that conflict with utilitarianism because they’re influenced by whether a harm is “prototypically violent.” But it turns out that prototypical violence involves harming actively, through personal contact, and as a means to an end, features that together form a morally relevant factor; it’s not merely an aversion to pushing. Similarly, the well-established bystander effect shows that helping behavior is sensitive to whether one perceives any help to be necessary, but that’s a morally relevant consideration (contra Doris). After examining many kinds of influences, I build on earlier work with Victor Kumar to develop a kind of dilemma for those who seek to empirically debunk many of our moral thoughts or motivations: the purportedly problematic influences are often either substantial or morally irrelevant, but rarely both.

Another key move in the book is particularly relevant for this blog, as it draws from the literature on rationalization and motivated reasoning. Consider, for instance, that participants in one study were more likely to cheat only a little on a test, even in private, because that’s all they could justify to themselves. In another study, publicly supporting environmentally friendly products licensed participants to cheat a bit afterwards. In various ways, we often rapidly and unconsciously rationalize bad behavior, and not merely because it brings personal gain but because we need a way to justify it to ourselves on moral grounds.

Others have made related points in different contexts about how rationalization can be motivated and even beneficial (see e.g. Summers, Bortolotti, Sie). But the focus is often on post hoc rationalization, whereas I emphasize what I call ante hoc rationalization, since it serves to justify the desired behavior before one engages in it. That’s important because it suggests that reasoning (even if poor reasoning) is a cause of the resulting judgment or motivation, not merely an effect.

In this way and others, I hope the book draws helpful connections among distinct literatures. Rather than emphasizing a handful of provocative studies, the book aims to synthesize a wide range of research in order to extract well-grounded lessons for moral psychology.
