
Causal Illusions and the Illusion of Control: Interview with Helena Matute

In this post I interview Helena Matute, who is Professor of Psychology and director of the Experimental Psychology laboratory at the University of Deusto in Bilbao, Spain.


AJ: You are a leading expert on causal illusions. Could you explain what causal illusions and illusions of control are?

HM: A causal illusion (or illusion of causality) occurs when people perceive a causal relationship between two events that are actually unrelated. The illusion of control is just a special type of causal illusion in which the potential cause is our own behavior. That is, a causal illusion is often called an illusion of control when people believe that their own behavior is the cause of the unrelated effect, or, in other words, when they believe that they have control over uncontrollable events in their environment.

Illusions of causality and of control occur in most people, particularly under certain conditions. For example, when the potential cause and the potential effect occur frequently and in close temporal contiguity, most people develop the illusion that they are causally related. It becomes very difficult, if not impossible, to detect with the naked eye that they are not. Indeed, just as we need a measuring tape to counteract optical illusions when we want to measure the size of an apartment, the use of a specialized tool (in this case the scientific method) is absolutely necessary when we need to make sure that a relationship (say, between taking a certain pill and some health benefit) is causal rather than incidental.

For instance, imagine that someone tells people to take drug A when they suffer from syndrome X “because 80% of 100 patients who took drug A recovered from syndrome X”. Of course this information seems to suggest that there is a causal relationship between drug A and recovery, and if it comes from people we know and trust, we could develop the illusion. Note, however, that there is an important piece of information that we are not given in this example. Imagine that it is also true that “80% of 100 patients who took a sugar pill instead of drug A recovered from syndrome X just as well and just as fast”. Hmm. This new information is now telling us that drug A is totally ineffective; it is just like taking candy. That is, we humans are not prepared to detect these illusions unless we run a controlled experiment, or a controlled clinical trial, so that we get both the information on what happens when we take the drug and what happens when we do not take it.

The problem is that in our daily life, most of us tend to be content with the information that 80% of people taking the pill felt better afterwards, and therefore we assume a causal relationship and take the pill without even asking what the recovery rate for those not taking the pill was. This illusion of causality can have serious consequences (as when people refuse to go to the doctor just because they feel they can intuitively know what works for them). Without the aid of a careful and controlled methodology we humans are victims of causal illusions very often.
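The comparison Matute describes corresponds to the standard contingency index ΔP, the difference between the probability of the outcome given the cause and given its absence. A minimal sketch, using the hypothetical figures from her drug A example (the function name `delta_p` is my own, for illustration):

```python
def delta_p(p_outcome_given_cause, p_outcome_given_no_cause):
    """Contingency index: positive values suggest a genuine causal link,
    zero means the cause adds nothing over its absence."""
    return p_outcome_given_cause - p_outcome_given_no_cause

# Hypothetical figures from the example above
p_recover_with_drug = 80 / 100     # 80% of patients on drug A recovered
p_recover_without_drug = 80 / 100  # 80% on a sugar pill recovered too

contingency = delta_p(p_recover_with_drug, p_recover_without_drug)
print(contingency)  # 0.0 -> drug A is no better than the placebo
```

The crucial point is that without the second cell of the comparison (recovery without the drug), ΔP simply cannot be computed, which is why a controlled trial is indispensable.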


AJ: Based on your research, do you think the illusion of control is a purely cognitive bias or are there motivational causes as well?

HM: There are many experiments in the literature that seem to suggest that there are motivational bases for the illusion of control. For example, when people are more personally involved in trying to obtain the uncontrollable event, or when they are not depressed, they tend to develop stronger illusions. The illusion of control has often been regarded as a motivational mechanism to protect self-esteem. However, this does not apply well to the more general case of the illusion of causality where the potential cause is an external event which has nothing to do with our behavior and self-esteem.

Moreover, even in the case of the illusion of control (i.e., when the potential cause is our own behavior), what we have observed in our laboratory is that the methodology used in many of the published experiments does not really allow a discrimination between the motivational and the cognitive explanations. For instance, we have shown that when people are personally involved in trying to obtain an outcome (either because they are instructed to, or because their performance is being monitored, or for various other reasons that we have reported in several research articles), people attempt to obtain the outcomes with higher frequency, that is, they act more frequently than those that are not personally involved, or are depressed, and are thus more passive.

In all of those cases we have shown that a variable that often mediates the relationship between motivation to control and illusion of control is the probability of the potential cause, in this case, the probability of acting. That is, it is well known that when people are strongly motivated to obtain something they will tend to act more than when they are not motivated. They also will tend to be more active when they are not depressed. And it is also well known that when the potential cause occurs frequently, the illusion is stronger.

We show that in cases in which people act frequently, they experience many coincidences between their behavior and the outcome. Those coincidences will in turn give rise to the illusion of control. More specifically, we have observed that in many of the traditional experiments supporting the motivational interpretation, the higher probability of acting was the variable underlying such a result.
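The mechanism described above can be illustrated with a minimal simulation (my own sketch, not one of the laboratory's actual experiments): when the outcome occurs frequently but entirely independently of the action, a person who acts on most trials will nonetheless experience many more act–outcome coincidences than a more passive person.

```python
import random

def coincidence_count(p_act, p_outcome, trials=10_000, seed=0):
    """Count trials where an action happens to coincide with an
    outcome that occurs independently of that action."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        acted = rng.random() < p_act
        outcome = rng.random() < p_outcome  # independent of acting
        if acted and outcome:
            hits += 1
    return hits

# Same uncontrollable outcome rate; only the rate of acting differs.
frequent_actor = coincidence_count(p_act=0.9, p_outcome=0.7)
passive_actor = coincidence_count(p_act=0.2, p_outcome=0.7)
print(frequent_actor, passive_actor)  # the frequent actor sees many more coincidences
```

Those extra coincidences are precisely the biased sample of evidence that, on the cognitive account, feeds the illusion of control.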

Therefore, I am inclined to think that most of the evidence available so far does indeed support a cognitive view according to which people expose themselves to biased information as a function of whether they act with high or low probability. As long as they expose themselves to biased information, there is no way they can detect that the outcome could have occurred equally often if they had not acted to obtain it. This being said, I do believe that there is probably a motivational component involved as well in the illusion of control, though current research has not yet shown it unambiguously. Designing experiments where this component can be unambiguously discriminated from the cognitive and behavioral component is an important avenue for research in the years ahead.

AJ: What do you take the relationship between causal illusions and the optimism bias to be? Do the former cause the latter?

HM: Does optimism bias cause an illusion of control, or is the illusion that we are controlling our environment what enhances our optimism? I do not know… I would speculate that one potential route could be that the optimism bias makes us feel that we can control our environment and therefore we try… And if we try and act frequently, and the event we are looking for occurs frequently but independently of our behavior, then the more optimistic we are and the more we act, the higher the chances that the occurrence of the desired event will coincide with our attempts to obtain it. This will in turn reinforce our illusion of control and our motivation to persevere. In this way, the illusion of control could be an efficient safeguard against passivity and inactivity, which would possibly have lethal consequences in evolutionary terms.

Of course the alternative route is also possible and could start by our acting just by chance one day and obtaining a desired event, which would increase our motivation to act, so our chances to obtain the desired event again in the near future will also increase, as will our illusion of control and hence our optimism. Which one came first? I cannot tell, but the two of them seem to be highly interrelated.

AJ: In your work, you point to negative consequences of seeing cause-effect relations where there are none, as for example when this leads people to uncritically believe in unscientific forms of medicine such as homeopathy. Do you think causal illusions have predominantly negative effects? Can they also be ‘positive illusions’, to borrow Taylor’s famous phrase?

HM: Oh, they surely have positive consequences as well. Otherwise they would not have been retained to this day. This is a trend that evolution has favored, which means that those individuals who in early times developed the illusion that they were controlling their environment had an evolutionary advantage. They were probably more active and more persistent in their attempts to survive and so were more successful.

Those who believed that they had no control over their environment quite possibly reduced their motivation to act, and their actual chances to survive, as in learned helplessness phenomena. When Overmier, Seligman and Maier first described learned helplessness effects in 1967 they found their dogs had lost their motivation to initiate voluntary responses after exposure to uncontrollable events. If believing (or realizing) that you have no control leads you to feel depressed, passive, and unresponsive, then a lack of causal illusion in those cases can be quite damaging. So, yes, causal illusions do have a positive side in many real life situations, but today there are also many situations in which illusory beliefs can be quite damaging. For instance, it might have been positive to believe that an innocuous herb could heal you in the times when no scientific medicine existed. However, maintaining such beliefs at present may have disastrous consequences.

To conclude, it is important to keep in mind that we cannot spend our life looking for biases, illusions and errors all the time, so we should be aware that we will suffer from some degree of illusory perception of causality from time to time. Thus, what I believe we should do is (1) be aware that we are all victims of these illusions, so that we will be ready to detect them when they occur, and (2) recognize those situations in which causal illusions would have negative consequences for us and for those we love, as it is in those cases that we need to be particularly vigilant in detecting and combating them. In other cases, just forget or make fun of your superstitions and illusions.
