
Post-Self-Deception Judgements

This post is by Martina Orlandi, Assistant Professor of Philosophy at Trent University Durham, Canada. Her research focuses on moral psychology, philosophy of mind (including philosophy of artificial intelligence), and philosophy of action. She has a particular interest in practical irrationality, especially self-deception, self-control, and resilience.


Martina Orlandi


Suppose you’re having a conversation with your old friend Sasha. She casually tells you how her husband has been behaving lately: he’s getting calls at weird times of the day, he’s getting home later than usual, and last week Sasha saw a flirty text message show up on his phone. In spite of all this, Sasha insists that things are good between them and that her husband is faithful. You know that Sasha is self-deceived about this. Her self-deceit lasts for a few months until one day Sasha tells you that she left her husband after he admitted to having an affair. While this news doesn’t surprise you, what comes next is striking: Sasha confesses to having known the truth all along.

What should you make of Sasha’s post-self-deception confession? On the one hand, maybe she did know all along that her husband was unfaithful. On the other hand, Sasha has been self-deceived for months, so maybe she isn’t the best judge of her own mental states. These kinds of post-self-deception confessions are common in formerly self-deceived individuals, and they are the topic I explore in my paper “I Knew All Along: Making Sense of Post-Self-Deception Judgments” (published in Synthese in 2024).

Post-self-deception judgments are philosophically interesting because they can pose challenges for accounts of what self-deception is in the first place. Some theories say that self-deceived people only truly believe their own lies, while others say that they may suspect the unwelcome truth. But Sasha’s post-self-deception confession is hard to square with the former theories: if Sasha didn’t believe the unwelcome truth, then why would she later confess otherwise?

I think this challenge can be resolved if we argue that, like the beliefs of self-deceived individuals, post-self-deception judgments may themselves be unreliable. This is the argument that I provide in my paper. I show that those theories of self-deception according to which the self-deceived does not believe the unwelcome truth can say that post-self-deception judgments are themselves an instance of self-deception. In particular, they are caused by the kind of hindsight bias known as “foreseeability”, whereby the individual comes to believe that a past event could easily have been predicted.

I argue that hindsight bias fits the structure of self-deception: just as self-deceived people believe against evidence that they would normally take as compelling, in hindsight bias one believes against the evidence of what one’s past epistemic situation actually was. This is why Sasha would think, post-self-deception, that her husband’s unfaithfulness could easily have been predicted. It also shows why post-self-deception judgments are not reliable: hindsight bias, by definition, does not accurately track past experiences.

However, characterizing post-self-deception judgments as hindsight bias is bad news for the self-deceived in another way: it poses a threat to their ongoing epistemic practices. This is because psychological research suggests that hindsight bias can impair learning. When we (wrongly) think of the past as easily predictable, we don’t spend time trying to figure out exactly where and how we erred. 

As a result, this can lead us to repeat the same epistemic missteps over and over again. In light of this, I conclude that while coming out of self-deception is typically viewed as a good thing, when formerly self-deceived individuals confess to having ‘known all along’ they might actually be more vulnerable to future instances of self-deception.
