
Unintended Consequences

The Forum (BBC World Service)
We all recall situations where our choices did not bring about the outcomes we expected. This can happen to a couple planning a family, to scientists predicting the result of a new experiment, or to politicians implementing a new policy to overcome a problem.

For each of these individual situations there may be specific reasons that explain why the outcome was unexpected, but I am interested in the general question: why is it that our actions have unintended consequences? We can approach this question by asking which capacities would make it more likely for us, as human agents, to reliably predict the events our actions give rise to when we make decisions, and by reflecting on the wealth of empirical data coming from the psychological literature.

Being able to predict the outcomes of our actions would require good memory, sound reasoning, and effective deliberation. With these, we could remember what consequences our previous actions have had, infer from our past experiences what consequences our future actions are likely to have, and use this information when deciding on a course of action. Unfortunately, recent psychological evidence suggests that the prospects for good memory, sound reasoning, and effective deliberation are poor, even for agents who are healthy and well-educated.

MEMORY. As many of the contributors to this blog have argued, when we remember we do not retrieve a file from a folder in a filing cabinet, thereby accessing all the neatly recorded details of a past event; rather, we attempt to complete a puzzle with several missing pieces. We build a largely coherent story out of disconnected bits of information, information that may come from direct experience or from the testimony of others. In order to complete the picture, we need to fill gaps with plausible guesses about how things went. Thus, our memories are vulnerable to distortions and fabrications. For instance, we let our present beliefs and feelings colour the way we remember the past. It has been shown that we describe our past political convictions and ideological stances as closer to the ones we currently hold than they were at the time. We also tend to put a positive spin on facts that relate to us, and this tendency is accentuated with aging. We present ourselves as more coherent and overall better than we actually are. In predicting the consequences of our actions, then, we may be more optimistic about positive outcomes than the evidence warrants.

REASONING. Human reasoning is systematically flawed: we make basic mistakes when we think about how probable events are and about the conditions under which a certain statement would turn out to be false. Moreover, we tend to interpret events in a biased way. For instance, we tend to believe that we are responsible for positive events, and we consider ourselves praiseworthy for successful outcomes even when these were determined by the surrounding environment and were actually out of our control. By contrast, we attribute negative events to other people or to external circumstances, and avoid considering ourselves blameworthy for failures. The Good Samaritan study, for instance, showed that people are much more likely to help a stranger in distress if they are not in a hurry, independently of their character traits. Our actions tend to be more influenced by external circumstances than we are happy to admit. This of course matters to decision making, as we systematically underestimate the importance of the surrounding environment and overestimate the stability of our character traits and their efficacy in shaping how we behave.

DELIBERATION. What further undermines our capacity to predict the consequences of our actions is the simple fact that we very rarely think (in the sense of “reflect” or “deliberate”) before making decisions. Most of our decisions are fast and almost automatic; only some of the time do we sit down, take pen and paper, and explicitly weigh the advantages and disadvantages of several possible outcomes before deciding or acting. We may do so when we compare different mortgage deals, or when we ponder a possible change of career, because these are obviously decisions with long-term effects, but we do not do it for most of our decisions, not on an everyday basis. This leads to two questions.

One is: what are most of our decisions based on, if not reflection? The other is: why do we seem able to provide reasons for our decisions when we are asked to do so? The answer to the first question is that the mechanisms responsible for our fast decisions are unknown to us. We may be moved by cues in the environment (as when we select socks positioned on our right side rather than socks positioned on our left side, even when the socks are identical items, as in the classic experiment by Richard Nisbett and Tim Wilson). Or we may make judgements instinctively, driven by evolutionarily helpful reactions and socially conditioned responses (as when we are quick to condemn incest between siblings as morally objectionable because we have a brute reaction of disgust towards it, as in another famous series of experiments by Jonathan Haidt). The interesting thing is that when we are asked to justify our consumer choices and moral judgements, we rarely admit our ignorance about how they came about. Most often we make up an explanation that sounds plausible even if it does not accurately represent the way we came to the choice or the judgement. (This made-up explanation is called “confabulation”, and it involves no awareness of the mismatch between the cause of action and the explanation, and no intention to deceive others.)

So, in the case of consumer choice after selecting socks in a shop, and in the case of moral judgement after evaluating an incest case, the reasons we provide for our attitudes do not match the particulars of the case. We offer a justification that does not explain how the choices and judgements were made and that clashes with reality. People asked why they chose the socks they did mentioned that the chosen socks were softer and had brighter colours, when in fact they were identical to the rejected socks and differed only in their position. People asked why the incest case was morally objectionable mentioned the risks of psychological suffering and of an unwanted pregnancy down the line, when the specific case described by the experimenter specified that the act did not have those consequences for the siblings involved.

In a more recent series of studies by Petter Johansson and Lars Hall, an even more unsettling phenomenon, choice blindness, has been observed. Here people are asked to make a choice between two items, A and B. They select A over B, and then they are asked why they chose B (that is, why they chose the item they had actually rejected). Instead of recognising the experimenter’s manipulation, they provide a justification for the alleged choice, the one they had not made, showing how flexible (or should I say ‘flimsy’?) attitudes are. The items can be strangers’ faces (which one is the most attractive?), political parties or specific policies (which one do you support?), and a number of consumer items too, showing that we tend to be blind even to choices that should be very important to us, even self-defining, such as whether we have progressive or conservative views. Obviously, choice blindness has considerable implications for the phenomenon of unintended consequences, as agents may experience consequences of their actions that they had not intended, and describe them as “intended” by telling a story about how those consequences were expected all along.

Now, the studies I mentioned may be taken to sketch a very bleak view of human agency and rationality: we are creative with our memories, flawed in our reasoning, and blind to our choices. But I do think that there are some reasons for optimism in spite of these conclusions, and this is what my new project, PERFECT, focuses on. First, we as human agents have the capacity to identify our weaknesses and limitations, and this is the first step towards taking measures to improve. If I know I tend to ascribe positive but not negative events to myself, and if I know I am more likely to choose items on my right when making a choice, then with some effort and some good will I can compensate for those tendencies.

Second, biases and confabulations can have some positive functions that it would be unwise to neglect. The capacity to reconstruct the past when its details are no longer available, and to provide reasons when the causes of our actions and decisions are mysterious to us, helps us give some direction to our lives and impose some order and coherence on random facts about ourselves. It also increases our confidence and has a positive impact on social interactions and on performance.

Finally, what may be false about the ‘past me’ can become true of the ‘future me’. In the long run, ideas about ourselves that are illusory or not supported by evidence can inform our more reflective decisions and shape who we are, giving us some degree of control over our actions. We can decide on the basis of the type of person we want to be, even if our understanding of the person we are is based on a misconception or an illusion. If I see myself as someone who loves bright-coloured socks and anti-discrimination policies, then when I do reflect on my choices in the future I will attempt to act and choose in a way that is coherent with that picture of myself, even if those preferences did not determine my past actions and choices.

My new book, Irrationality, deals with some of these issues in more detail (see chapters 3 and 4 in particular). I also talked about some of these issues on an episode of The Forum (BBC World Service), aired on Tuesday 9th December 2014, and in an interview with Philosophy Bites, published on 19th March 2015.
