
Is Unrealistic Optimism an Adaptation?




We humans have a well-established tendency to be overly optimistic about our future: we think that the risk of bad things happening to us is lower than it actually is, while the chance of good things happening to us is higher than it actually is. Why is this the case? What drives these positive illusions?

There are two possible ways in which we can understand and try to answer these questions. We can either look at the causal mechanisms underlying unrealistic optimism, or we can ask why this feature has survived and spread through human populations. Evolutionary psychology aims to answer the second question, in essence claiming that we are unrealistically optimistic because this has had benefits in terms of survival and reproduction.

So why should it be adaptive to have systematically skewed beliefs, which are frequently unwarranted and/or false? Martie Haselton and Daniel Nettle have argued that unrealistic optimism is a form of error management: it helps us make the least costly error when making decisions under uncertainty.

Error management theory holds that when making decisions in contexts of uncertainty, we should err on the side of making low cost, high benefit errors, and that this strategy can at times outperform unbiased decision making (cf. Haselton and Nettle 2006). This is nicely illustrated by the now well-known fire alarm analogy. If a fire alarm is set at a slightly too sensitive setting, we will occasionally have the inconvenience of turning it off when the toast has burnt. If it is set at a less sensitive setting, we run the risk of burning alive in our beds because the alarm was activated too late. The over-sensitivity of the fire alarm carries only low costs (annoyance) but high benefits (a reduced risk of death).
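To make the cost asymmetry concrete, here is a minimal sketch of the calculation behind the analogy (my own illustration with made-up numbers and a deliberately simple error model, not anything from Haselton and Nettle): when a missed fire is vastly more costly than a false alarm, a decision threshold biased towards false alarms has a lower expected cost than a symmetric, "unbiased" one.

```python
# Toy expected-cost comparison for the fire alarm analogy.
# All numbers and the linear error model are hypothetical assumptions.

def expected_cost(threshold, p_fire=0.01, cost_miss=1000.0, cost_false_alarm=1.0):
    """Expected cost of an alarm that triggers only when the evidence of fire
    exceeds `threshold` (0 = maximally sensitive, 1 = maximally insensitive)."""
    p_false_alarm = (1 - p_fire) * (1 - threshold)  # more sensitive -> more false alarms
    p_miss = p_fire * threshold                     # less sensitive -> more missed fires
    return p_false_alarm * cost_false_alarm + p_miss * cost_miss

print(expected_cost(threshold=0.5))  # symmetric setting: expected cost ~5.50
print(expected_cost(threshold=0.1))  # over-sensitive setting: expected cost ~1.89
# Because a missed fire is far costlier than a false alarm, the biased,
# over-sensitive setting has the lower expected cost.
```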

This model of the selectional benefits of unrealistic optimism is committed to the claim that we should only be unrealistically optimistic in situations where potential payoffs for action are high and costs of failed action are low. If individuals were unrealistically optimistic in high cost/low benefit scenarios, this would decrease their chances of survival and reproduction. Does unrealistic optimism conform to this pattern?


I would like to argue that, in many cases where we display unrealistic optimism, conceptual issues make it impossible to establish whether we are faced with a low cost/high benefit scenario, and that insofar as we have empirical evidence, much of it speaks against the error management hypothesis.

According to error management theory, unrealistic optimism is beneficial because it leads to the belief that a desirable effect is achievable or an undesirable effect is avoidable, and this makes us more likely to take steps to achieve or avoid it. However, here’s the rub: for this to work, belief in success must not breed complacency. It is perfectly compatible with unrealistic optimism that, precisely because it makes us think outcomes are achievable, we feel less pressure to take the steps necessary to achieve them. So conceptually, the link between unrealistic optimism and future outcomes is so underspecified that overconfidence may have the opposite effect from the one the theory specifies, an effect that is not beneficial. Furthermore, how costly a given course of action will be depends on what resources we invest in achieving the goal, and this is not something we can read off optimistic predictions regarding the likelihood of achieving that goal.

When we turn to the empirical evidence on the effects of unrealistic optimism, we see that it does in some cases generate complacency. This has most frequently been observed in studies of the link between individuals’ unrealistic optimism and their intentions to take precautions against health problems (cf., e.g., Kim and Niederpeppe 2013).
