
Questioning Optimism


I'm Adam Harris and I'm an experimental psychologist from University College London.

I am perhaps an unusual contributor to the Imperfect Cognitions blog, as I have argued that cognitions might seem imperfect because of imperfections in prevalent methodologies, predominantly arising from a failure to appreciate the appropriate normative basis of a task. Specifically, my work has suggested that the assumed ubiquity of optimism across our species is based on questionable evidence.

A prominent example of this work is a paper I wrote with Ulrike Hahn (published in Psychological Review), in which we demonstrated through simulation that rational agents can be labelled optimistic by the prevalent, comparative method of testing unrealistic optimism. In this method, participants respond to the question "Compared with the average student of your age and sex, how likely are you to...", where future life events are inserted for the ellipsis.

Responses are typically provided on a scale from -3 (much less likely than the average) to +3 (much more likely than the average), where a response of zero represents 'about the same as average'. The logic of the test is that if participants accurately report their comparative chances, their responses should average zero, since the group's average risk is, by definition, the average person's risk. Consequently, any deviation from zero is taken as indicative of a systematic bias. The oft-observed result that average responses to negative events are significantly below zero is taken as evidence that, on average, members of the group underestimate their relative chances. Because we do not wish to experience negative events, such a result is taken as evidence of optimism.

In Harris and Hahn (2011), we demonstrated that three statistical artifacts could generate the oft-observed pattern of results from rational Bayesian agents who are, by definition, unbiased. Such a result raises questions over the interpretation of the same pattern of results observed in human participants: if the pattern is consistent with that produced by rational agents, it no longer provides evidence of bias. Essentially, these tests fail the major prerequisite for a satisfactory test of bias: unbiased agents appear biased! In ongoing (as yet unpublished) research, we have failed to identify any evidence for optimism after controlling for these confounding artifacts (my website will be updated when these results are published).
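The published simulations are more detailed than anything I can reproduce here, but a minimal sketch of one artifact of this general kind may help. Assume, purely for illustration, that each agent's true risk of a rare negative event is drawn from a right-skewed distribution (a Beta(2, 18), mean 0.10, is my made-up choice), that every agent knows both their own risk and the population average exactly, and that they answer truthfully on the coarse, bounded -3 to +3 scale. The bounded scale caps how strongly the small high-risk minority can respond, while the calibrated low-risk majority all respond below zero, so the group mean comes out negative:

```python
# A minimal sketch, NOT the actual Harris & Hahn (2011) simulations.
# Distribution, response rule, and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def comparative_ratings(risks, base_rate, n_points=3):
    """Truthful -3..+3 ratings: the signed relative difference from the
    base rate, rounded to the nearest scale point and capped by the scale."""
    rel = (risks - base_rate) / base_rate    # +2.0 means three times the average risk
    return np.clip(np.round(n_points * rel), -n_points, n_points)

n_agents = 100_000
base_rate = 0.10                             # population mean risk of the event
risks = rng.beta(2, 18, size=n_agents)       # right-skewed: median risk < mean risk

ratings = comparative_ratings(risks, base_rate)
print(f"mean true risk:          {risks.mean():.3f}")     # ~0.10: agents are calibrated
print(f"mean comparative rating: {ratings.mean():+.2f}")  # reliably below zero
```

Every agent in this sketch reports honestly, yet the standard analysis would label the group optimistic.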

In addition to raising specific concerns about comparative unrealistic optimism, this work highlights the general importance of understanding what participants' responses represent and what the appropriate normative standard for those responses is. In unrealistic optimism research, participants' responses represent their understanding of their own risk relative to the average person's risk. Normatively, their estimate of their own risk should combine the base rate with any individuating information they possess. This insight is a critical consideration when evaluating conclusions from any measure designed to assess bias in risk estimates about real-world events (see also Harris et al., 2013).
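To make that normative standard concrete, here is a small worked example with made-up numbers: an agent who knows the base rate and holds one piece of diagnostic individuating information should, normatively, combine the two by Bayes' rule, so their own-risk report can legitimately sit well away from the average without any bias.

```python
# A hedged illustration of the normative point (numbers are invented):
# a calibrated agent combines the base rate with individuating
# information via Bayes' rule in odds form.
base_rate = 0.10          # assumed average risk in the population
likelihood_ratio = 3.0    # assumed diagnostic cue, e.g. relevant family history

prior_odds = base_rate / (1 - base_rate)      # 0.111
posterior_odds = prior_odds * likelihood_ratio
own_risk = posterior_odds / (1 + posterior_odds)
print(f"own risk: {own_risk:.2f} vs base rate {base_rate:.2f}")  # 0.25 vs 0.10
```

An agent reporting a risk of 0.25 here is not biased; they are using their individuating information exactly as the normative standard requires.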

Fortunately, there is also an initial, easy check for an optimism bias that cannot be accounted for on statistical grounds. A statistical account predicts the same direction of effect regardless of valence, and this has opposite implications for optimism across events of opposite valence. If, for example, a statistical account predicts lower responses for negative events, it will also predict lower responses for comparable positive events. Because of the reversed desirability of positive and negative events, however, the same direction of effect that constitutes optimism for one valence would constitute pessimism for the other.
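Continuing the hedged sketch from above (same invented distribution and response rule), the artifact depends only on the shape of the risk distribution and on the bounded scale, not on whether the event is desirable, so it drives mean ratings in the same direction for both valences:

```python
# Same illustrative sketch as before, run for a rare negative event and
# a rare positive event with the same assumed risk distribution.
import numpy as np

rng = np.random.default_rng(1)

def comparative_ratings(risks, base_rate, n_points=3):
    rel = (risks - base_rate) / base_rate
    return np.clip(np.round(n_points * rel), -n_points, n_points)

base_rate = 0.10
for valence in ("negative", "positive"):
    risks = rng.beta(2, 18, size=100_000)
    mean_rating = comparative_ratings(risks, base_rate).mean()
    # Below zero means "less likely than average": read as optimism for
    # the negative event but as pessimism for the positive one.
    print(f"{valence} event: mean rating {mean_rating:+.2f}")
```

Under the standard interpretation, the first result would be scored as optimism and the second as pessimism, even though both arise from the identical, bias-free mechanism.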

Thus, the inclusion of both positive and negative events can serve as a first-stage litmus test for a possible confounding influence of statistical artifacts. I therefore recommend that researchers routinely include both positive and negative events in their tests of optimism. In my own work, all such tests to date have found the same direction of effect for both valences. This constitutes seeming optimism for one valence and seeming pessimism for the other, and thus fails to provide the conclusive evidence required for optimism.



