
Cognitive Biases, Error Management Theory, and the Reproducibility of Research Findings


This post is by Miguel A. Vadillo, Lecturer in Decision Theory at King's College London. In this post he writes about cognitive biases, error management theory, and the reproducibility of research findings. 

The human mind is the end product of hundreds of thousands of years of relentless natural selection. You would expect that such an exquisite piece of software would be capable of representing reality in an accurate and objective manner. Yet decades of research in cognitive science show that we fall prey to all sorts of cognitive biases and that we systematically distort the information we receive. Is this the best evolution can achieve? A moment’s thought reveals that the final goal of evolution is not to develop organisms with exceptionally accurate representations of the environment, but to design organisms good at surviving and reproducing. And survival is not necessarily about being rational, accurate, or precise. The real goal is to avoid making mistakes with fatal consequences, even if the means to achieve this is to bias and distort our perception of reality.

This simple principle is the core idea of error management theory, one of the most interesting frameworks to address systematic biases in perception, judgement, and decision making. From this point of view, our cognitive system is calibrated to avoid making costly errors, even if that comes at the expense of making some trivial errors instead. For instance, a long tradition of research on the illusion of control shows that people tend to overestimate the impact of their own behaviour on significant events. An advocate of error management theory would suggest that falling into this error is perhaps not as costly as the opposite mistake: Failing to detect that one has control over some relevant event. Consequently, evolution has endowed us with a predisposition to overestimate control.
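
To make this logic concrete in expected-cost terms, here is a minimal sketch (my own illustration, with made-up probabilities and costs rather than figures from the error management literature): once misses are far costlier than false alarms, a rule biased towards assuming control makes more errors overall yet costs less on average.

```python
# A minimal sketch of the error-management logic, with made-up numbers.

def expected_cost(p_signal, p_act_if_signal, p_act_if_noise,
                  cost_miss, cost_false_alarm):
    """Expected cost of a decision rule under asymmetric error costs."""
    p_miss = p_signal * (1 - p_act_if_signal)        # real control, but we miss it
    p_false_alarm = (1 - p_signal) * p_act_if_noise  # no control, but we act anyway
    return p_miss * cost_miss + p_false_alarm * cost_false_alarm

p_signal = 0.1                             # genuine control is rare
cost_miss, cost_false_alarm = 100.0, 1.0   # but missing it is very costly

# "Unbiased" rule: act only when the evidence clearly points to control.
unbiased = expected_cost(p_signal, 0.7, 0.1, cost_miss, cost_false_alarm)
# "Biased" rule: err on the side of assuming control, accepting trivial false alarms.
biased = expected_cost(p_signal, 0.95, 0.5, cost_miss, cost_false_alarm)

print(f"unbiased rule: {unbiased:.2f}  biased rule: {biased:.2f}")  # ~3.09 vs ~0.95
```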

In a way, science is a set of tools specifically designed to overcome this bias: Research methods and statistics were conceived to counteract our tendency to see patterns where there is only chance and to ignore alternative explanations of the events we observe. Perhaps these biases were useful for surviving in the savannah, but they are definitely not your friends when you want to discover how nature works. Unfortunately, these refined methods are unlikely to work perfectly if the key asymmetry that gave rise to the biases remains intact. Whenever the evidence is ambiguous, we will always be tempted to interpret it in the most favourable way, avoiding costly errors.
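
To see what these tools are up against, consider how easily ‘patterns’ emerge from pure noise. The toy simulation below is my own illustration (the sample size and number of variables are arbitrary): search across enough unrelated variables and a seemingly strong correlation will always turn up.

```python
# A toy illustration of seeing "patterns" where there is only chance.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((50, 20))    # 50 observations of 20 unrelated variables

corr = np.corrcoef(data, rowvar=False)  # 20 x 20 correlation matrix
np.fill_diagonal(corr, 0)               # ignore self-correlations
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"strongest 'pattern': variables {i} and {j}, r = {corr[i, j]:.2f}")
# With 190 pairs to search, the largest |r| is typically around 0.4 or more,
# even though every variable is independent noise.
```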

Imagine that you are a young scientist trying to find a pattern in your freshly collected data set. You can think of two ways to analyse your data, both of them equally defensible. Following one route, you get a p-value lower than .05. In the alternative analysis your result is not significant. Maybe you have discovered something or maybe you have not. One of these interpretations will allow you to publish a paper in a prestigious journal and keep your position in academia. If you decide to believe the opposite, you have just wasted several months of data collection in exchange for nothing and you may eventually struggle to make ends meet. Neither belief is certain. But, understandably, if you have to decide which error to make, you will prefer it to be a Type I error.
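
A rough simulation makes the temptation measurable. The sketch below is hypothetical (I have chosen the two ‘equally defensible’ analyses myself: a plain t-test and a t-test after discarding apparent outliers): when there is no effect at all and you report whichever analysis reaches p < .05, the false-positive rate climbs above the nominal 5%.

```python
# A hypothetical sketch of "two defensible analyses" of the same null data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 10_000, 30
false_positives = 0

for _ in range(n_sims):
    group_a = rng.standard_normal(n)
    group_b = rng.standard_normal(n)    # no real difference between the groups
    p1 = stats.ttest_ind(group_a, group_b).pvalue         # analysis 1: plain t-test
    p2 = stats.ttest_ind(group_a[np.abs(group_a) < 2],    # analysis 2: t-test after
                         group_b[np.abs(group_b) < 2]).pvalue  # dropping "outliers"
    if min(p1, p2) < 0.05:              # report whichever analysis "worked"
        false_positives += 1

print(f"false-positive rate: {false_positives / n_sims:.3f}")  # above 0.05
```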

More than ten years ago (2005), John Ioannidis concluded that most published research findings must be false. Given the asymmetric costs of Type I and Type II errors for researchers, such a terrible bias in the scientific literature is exactly what you would expect to find according to error management theory. The current debate about the reproducibility of psychological science and other disciplines has focused extensively on developing new methods and fostering a new culture of open science. However, even if these new practices are badly needed, they are unlikely to put an end to biases in the scientific literature. There will always be contradictory findings, experiments with inconsistent results, and analyses leading to opposite conclusions. Biases will persist as long as researchers find some interpretations of the data more useful than others, even if they are inaccurate or plainly wrong. Error management theory predicts that scientific ‘illusions’ are the natural consequence of the reward structure imposed by scientific institutions. Without a radical change in the distribution of incentives, all the other measures can only have a limited impact on the quality of scientific research.
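
Ioannidis’s conclusion rests on a simple calculation. The back-of-the-envelope sketch below uses his positive predictive value formula with illustrative numbers (the pre-study odds and power are assumptions for the example, not estimates from his paper): when true hypotheses are rare and power is modest, fewer than half of the significant findings are true.

```python
# Back-of-the-envelope version of the Ioannidis (2005) argument, illustrative numbers.

def ppv(prior_odds, power, alpha):
    """Post-study probability that a significant finding is true."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# If the pre-study odds that a tested hypothesis is true are 1 to 10 (R = 0.1),
# power is 35%, and alpha is .05, then most significant findings are false:
print(f"PPV = {ppv(0.1, 0.35, 0.05):.2f}")   # about 0.41
```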
