Monday, 6 June 2016

Is Unrealistic Optimism an Adaptation?

We humans have a well-established tendency to be overly optimistic about our future: we think that the risk of bad things happening to us is lower than it actually is, while the chance of good things happening to us is higher than it actually is. Why is this the case? What drives these positive illusions?

There are two possible ways in which we can understand and try to answer these questions. We can either look at the causal mechanisms underlying unrealistic optimism, or we can ask why this feature has survived and spread through human populations. Evolutionary psychology aims to answer the second question, in essence claiming that we are unrealistically optimistic because this has had benefits in terms of survival and reproduction.

So why should it be adaptive to have systematically skewed beliefs, which are frequently unwarranted and/or false? Martie Haselton and Daniel Nettle have argued that unrealistic optimism is a form of error management: it helps us make the least costly error when making decisions under uncertainty.

Error management theory holds that when making decisions in contexts of uncertainty, we should err on the side of making low cost, high benefit errors, and that this strategy can at times outperform unbiased decision making (cf. Haselton and Nettle 2006). This is nicely illustrated by the now well-known fire alarm analogy. If a fire alarm is set to a slightly too sensitive setting, we will have the inconvenience of having to turn it off every once in a while when the toast has burnt. If it is set to a less sensitive setting, we run the risk of burning alive in our beds because the alarm goes off too late. The over-sensitivity of the fire alarm brings only low costs (annoyance), but high benefits (a reduced risk of death).
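The cost asymmetry behind the fire alarm analogy can be made concrete with a toy expected-cost calculation. All the numbers below are illustrative assumptions of mine, not figures from the error management literature; the point is only that when one kind of error is vastly more expensive than the other, the biased (over-sensitive) alarm has the lower expected cost even though it makes far more errors overall:

```python
# Toy expected-cost comparison for the fire alarm analogy.
# Assumed, illustrative numbers: a false alarm costs 1 unit of
# annoyance; a missed fire costs 10,000 units; real fires are rare.

p_fire = 0.001             # probability that detected smoke means a real fire
cost_false_alarm = 1       # burnt toast: low cost
cost_missed_fire = 10_000  # burning alive: catastrophic cost

# Over-sensitive alarm: always sounds on smoke. It never misses a
# fire, but pays the false-alarm cost whenever the smoke is harmless.
sensitive = (1 - p_fire) * cost_false_alarm

# Less sensitive alarm: stays silent on ambiguous smoke, avoiding
# false alarms, but misses (say) 20% of real fires.
miss_rate = 0.2
insensitive = p_fire * miss_rate * cost_missed_fire

print(f"expected cost, over-sensitive alarm: {sensitive:.3f}")
print(f"expected cost, less sensitive alarm: {insensitive:.3f}")
```

With these assumed numbers the over-sensitive alarm's expected cost (0.999) is well below the insensitive alarm's (2.0), which is the sense in which a biased decision rule can outperform an unbiased one.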

This model of the selective benefits of unrealistic optimism is committed to the claim that we should only be unrealistically optimistic in situations where the potential payoffs of action are high and the costs of failed action are low. If individuals were unrealistically optimistic in high cost/low benefit scenarios, this would decrease their chances of survival and reproduction. Does unrealistic optimism conform to this pattern?

I would like to argue that there are conceptual issues which, in many cases where we display unrealistic optimism, make it impossible to establish whether we are faced with a low cost/high benefit scenario, and that in so far as we have empirical evidence, much of it speaks against the error management hypothesis.

According to error management theory, unrealistic optimism is beneficial because it leads to the belief that a desirable effect is achievable or an undesirable effect is avoidable, and this makes us more likely to take steps to achieve or avoid it. However, here’s the rub: for this to work, belief in success must not breed complacency. It is perfectly compatible with the occurrence of unrealistic optimism that, precisely because it makes us think outcomes are achievable, we feel less pressure to take the steps necessary to achieve them. So, conceptually, the link between unrealistic optimism and future outcomes is so underspecified that overconfidence may have the opposite effect from the one the theory specifies, one that is not beneficial. Furthermore, how costly a given course of action will be depends on the resources we invest in achieving the goal, and this is not something we can read off optimistic predictions about the likelihood of achieving that goal.

When we turn to the empirical evidence on the effects of unrealistic optimism, we see that it does in some cases generate complacency. This has most frequently been observed in studies of the link between individuals’ unrealistic optimism and their intentions to take precautions against health problems (cf. e.g. Kim and Niederdeppe 2013).


  1. Hi Anneli,

    Very interesting! Here are some comments & questions.

    (1) I guess your claim "in order for this [EM theory] to work, it needs to be the case that belief in success does not breed complacency" is too strong. The theory can permit such a possibility, at least to some degree. My understanding is that the theory only says that, overall, having false success beliefs is systematically less costly than having false failure beliefs, even if the former can sometimes lead to complacency.

    (2) The empirical evidence (i.e., ignoring medical precautions) is interesting. But the error management bias was, according to the theory, shaped long ago, when modern medical diagnosis and intervention were not available. Ignoring medical precautions is certainly costly today, but the cost did not exist when the bias was shaped. The cost seems to be a product of the mismatch between ancient society and modern society.

    What do you think?


    1. Hi Kengo, Thanks for your responses, very interesting. Re 1) Yes, that's right. It would be sufficient to show that on average, benefits outweigh costs. However, the evidence that I have been able to find does not show that. Your second point is very interesting, and I address it in the worked out version of the paper. In its current formulation, the error management approach to unrealistic optimism is supposed to apply across all kinds of domains and be very open about the kinds of things we are optimistic about. I agree that one could rework the theory to have a narrower scope and only apply to likely problems in the environment of adaptation. However, one would then still need some empirical evidence that we get optimism in the right kinds of circumstances (high benefit/low cost). I haven't found any so far.

    2. "I agree that one could rework the theory to have a more narrow scope and only apply to likely problems in the environment of adaptation." This is slightly different from what I suggested. The bias might be a general one, as you point out. My suggestion was that the cost/benefit structure differs between the ancient environment and the current one. Thus, the same bias, which was beneficial in the past, might not be very beneficial now. (The idea is sometimes called the "mismatch hypothesis" in evolutionary psychology.) This might explain the evidence about ignoring medical precautions. After all, ignoring medical precautions is costly only when reliable medical diagnosis and effective intervention are available, and they did not exist in the past.

      Are you writing a paper on it? I really want to read it!


    3. Hi, sorry, I didn't express myself clearly enough. The idea was similar to the one you suggest: the bias is general, but historically this did not matter, because the relevant scenarios which led to it being passed on were different (probably linked to more immediate, short-term decisions and outcomes). I'll e-mail you about the paper.
