Tuesday 20 December 2016

Post Hoc Ergo Propter Hoc: Some Benefits of Rationalization

Jesse Summers is Adjunct Assistant Professor at Duke University, where he is also a Post-Doctoral Fellow at the Kenan Institute for Ethics, and a Lecturing Fellow for the Thompson Writing Program. In this post he writes about rationalization and some of its benefits, summarising his paper "Post Hoc Ergo Propter Hoc: Some Benefits of Rationalisation", which is forthcoming in a special issue of Philosophical Explorations on false but useful beliefs. The special issue is guest edited by Lisa Bortolotti and Ema Sullivan-Bissett and is inspired by project PERFECT's interests in belief.

You really shouldn’t trust me. At the very least, you shouldn’t trust me when I tell you why I’ve acted.

Part of the reason you shouldn’t trust me is that I often—much more often than I realize—don’t know why I’m doing something. The neuroscientist tells you that my brain predisposes me to act. Psychologists, too, assume that many factors and forces move me—my mood, habits from my youth, my environment, etc.—and I cannot hope to understand the way all of them influence me. And our folk psychological explanations of each other’s actions change how we praise and blame each other: “I’ll tell you why she really cancelled her trip to see you…”

Not only am I ignorant but, despite that ignorance, I confidently explain my own actions. I confidently and sincerely explain why I left my last job, though no one else believes the explanation. It’s not just the neuroscientist and the psychologist who doubt my explanation: so does everyone who knows me well.

So you shouldn’t trust me when I tell you why I’m doing something, or why I did something, because I rationalize and confabulate: I offer a sincere explanation of my action that is nevertheless a much worse explanation of my action than an alternative one. And you do this, too, and so does everyone else. We’re not liars, but, if we want to know why we do what we do, we shouldn’t trust our own explanations.

These rationalized explanations aren’t bad only when they’re false. Some explanations are bad even when they’re true: a guest tells you that he’s abruptly leaving your party to see a friend. That explanation is true, but this explanation, which is also true, is better: the friend he’s going to see is his drug dealer, and he’s an addict.

Given that we rationalize our actions and offer explanations of our actions despite our ignorance, maybe we should stop reporting—speculating, really—about why we’re acting? You ask me why I bought this book, why I don’t eat meat, or why I just sprinted naked across the courtyard, and, despite whatever I sincerely believe were my reasons, I should shrug and say, “Who knows why any of us do anything?”

What would be lost if we stopped looking for explanations of our own and others’ actions? I argue that at least two benefits of rationalization would be lost. And, while the costs of rationalization may still outweigh the benefits, these benefits are worth noticing since they reveal that action explanations do more than simply report (or attempt to report) the truth about one’s motivations.

The first benefit of rationalization is that it allows us to work out for ourselves what are good reasons for acting, on which we can act consistently. When I tell you why I’m doing something, I’m not only offering a causal explanation, but I’m also providing some justification for my doing it. If I say I bought this book because I want to mark it up as I read it, I’m also saying that this is a good justification for buying a book. It needn’t be a sufficient justification, of course. Reasons against buying the book abound. But in offering my reason for buying it, I’m endorsing this justification as a good (partial) justification. It’s a good justification in the way that “because it’s purple” isn’t even a partial justification for buying the book. (Unless one is Prince.) But offering this justification of one act is also to endorse the justification in general. So the first benefit of rationalization is that it allows us to work out good reasons for acting, justifications on which we can act consistently.

The second benefit is that it makes ordinary actions meaningful in a way that they would not be if we withheld rationalizations. When I say that I didn’t eat meat because of ethical concerns (instead of, say, environmental concerns, or aesthetic concerns), it gives my action more meaning than if I’d said, “I don’t know why I don’t eat meat: I just don’t.” And, over time, this explanation can even be self-fulfilling: what started for flimsy and superficial reasons (my best friend stopped eating meat) may change as I rationalize why I do it, come to see the force of the reasons I’m citing in my rationalizations, and end by genuinely acting for those very reasons.


  1. ", come to see the force of the reasons I’m citing in my rationalizations, and end by genuinely acting for those very reasons."
    how would we know that this is happening/possible?

  2. Based on personal experience, rationalization in politics usually involves distorting uncomfortable facts and logic to some non-trivial extent. Ideologically, morally, and socially acceptable facts and reasoning are generally accepted without much distortion or critical assessment. That seems to be the normal state of affairs for the human mind.

    In terms of human history, that biology of mind gave us modern civilization, all-out war, racial and religious conflict, and all the rest, good and bad. In terms of the bad, my feeble grasp of history suggests that rationalization tends to play a dominant role: e.g., wars are justified by vilifying the enemy with usually untrue characterizations, playing on racial or religious differences. Things that build civilization, good things such as scientific research and devising efficient ways of doing business, tend to involve less rationalization and thus less detachment from fact and logic.

    Since modern technology can fairly easily destroy modern civilization and possibly the human species, one can argue that if one could reduce rationalization in politics (the source of most bad things) then maybe the odds of long-term human well-being (including civilization) and survival would go up somewhat.

    The question, then, is whether it is possible for humans to at least partially overcome their innate tendency to rationalize in politics. If not, we are going to survive or self-annihilate on the biological merits, without any anti-rationalizing benefits flowing from cognitive and social science (or any other kind of science).
