
Social Approaches to Delusions (4): Collectively Jumping to Conclusions

Justin Sulik

Here is the fourth post in the series on social approaches to delusions, after the posts by Miyazono, Williams and Montagnese, and Wilkinson. In today's post, Justin Sulik, Charles Efferson, and Ryan McKay discuss their new paper, “Collectively Jumping to Conclusions: Social Information Amplifies the Tendency to Gather Insufficient Data”.

Justin Sulik is a postdoctoral researcher in the Cognition, Values and Behavior group at Ludwig Maximilian University of Munich. Charles Efferson is Professor in the Faculty of Business and Economics at the University of Lausanne. Ryan McKay is Professor of Psychology at Royal Holloway, University of London.

 

Charles Efferson


Human beings are inveterate misbelievers. At the individual level, our propensity for false beliefs about our prowess and prospects can be costly and dangerous, promoting harmful behaviours like unsafe driving, smoking and overspending. When such misbeliefs spread and amplify in large groups, however – we might call them “collective delusions” – they can have catastrophic consequences. Widespread suspicions – that vaccines cause more harm than good, that coronavirus is just another flu that will pass, or that climate change is a hoax – decrease people’s intentions to vaccinate, to self-isolate, or to reduce their carbon footprint, triggering or exacerbating global public health emergencies and environmental disasters.

How can we explain collective delusions? Psychological explanations typically appeal to two main causal pathways (see a and b in the figure). The first is a matter of individual psychology: some of us have cognitive biases or deficits that render us, in isolation, more prone to misbeliefs. The tendency to “jump to conclusions”, for instance – forming beliefs on minimal evidence – is thought to play a role in the formation of clinical delusions. 

[Figure: three causal pathways to collective delusions – (a) individual cognitive biases produce misbeliefs; (b) social information transmits beliefs; (c) social information shapes the cognitive biases themselves]
The second pathway is social: people are influenced by the beliefs of those around them, particularly by those with status and prestige. For instance, when a high-status individual like President Trump claims that vaccinations are causing an autism epidemic; that the coronavirus pandemic is totally under control; that President Obama was born in Kenya; or that global warming is a hoax created by the Chinese, these beliefs are likely to spread among the population.

In a new pre-registered study, we test whether a third causal pathway exists (pathway c in the figure), whereby social information affects the cognitive biases themselves, in addition to the beliefs those biases occasion. Can social context explain not just what people learn but also how they learn?

We investigate the “jumping to conclusions” bias specifically. To explore whether a social context can amplify this individual learning bias, we embedded a probabilistic reasoning task (the well-known “Beads Task”) in a social transmission-chain procedure. We recruited 2000 participants and randomly assigned them to one of 100 chains, or groups, of 20 participants each. Within each group, an initial participant (assigned to Position 1 within their group) undertook the Beads Task: they drew coloured beads from an urn containing beads of two colours, with the goal of discovering which colour was in the majority. Each participant could decide how much evidence to gather (specifically, how many beads to draw) before making their decision about the majority colour.
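To make the task's structure concrete, here is a minimal Python sketch of the draw-and-guess procedure. The 60/40 colour split and drawing with replacement are illustrative assumptions, not necessarily the parameters used in the study.

```python
import random

def draw_beads(n_draws, majority_colour="red", majority_ratio=0.6, rng=random):
    """Draw n_draws beads (with replacement) from an urn whose hidden
    majority colour makes up majority_ratio of the beads.
    The 60/40 split is an illustrative placeholder, not the study's ratio."""
    minority_colour = "blue" if majority_colour == "red" else "red"
    return [majority_colour if rng.random() < majority_ratio else minority_colour
            for _ in range(n_draws)]

def guess_majority(sample):
    """Guess whichever colour appears most often in the drawn sample."""
    return max(set(sample), key=sample.count)

# A participant who gathers only three beads before deciding:
sample = draw_beads(3)
print(sample, "->", guess_majority(sample))
```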



Ryan McKay


Subsequently, a second participant (assigned to Position 2 within their group) was given information about how many beads the first participant gathered before they also undertook the Beads Task. Thereafter, the participant in Position 3 was given information about the data-gathering decisions of participants in Positions 1 and 2, and so on. This was the procedure for half of the groups (our social condition). The other half were assigned to an asocial control condition, which was identical except that participants were not given information about the data-gathering decisions of previous participants in their group.
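As a rough illustration of how such a chain unfolds, here is a toy simulation (emphatically not the authors' behavioural model) in which each participant has a noisy personal inclination for how many beads to draw and, in the social condition, anchors partly on the mean draw count of their predecessors. The anchoring rule and all parameters are assumptions made purely for illustration.

```python
import random

def run_chain(n_participants=20, social=True, baseline=8, rng=random):
    """Toy sketch of one 20-person chain (hypothetical behavioural rule).
    Each participant forms a noisy inclination for how many beads to draw;
    in the social condition they also see, and anchor on, the mean draw
    count of everyone earlier in their chain."""
    history = []
    for _ in range(n_participants):
        inclination = max(1, round(rng.gauss(baseline, 2)))
        if social and history:
            # Hypothetical rule: split the difference between one's own
            # inclination and the observed group mean.
            inclination = max(1, round((inclination + sum(history) / len(history)) / 2))
        history.append(inclination)
    return history

print("social: ", run_chain(social=True))
print("asocial:", run_chain(social=False))
```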

We gave a small monetary reward to participants who correctly guessed the majority colour of the urn, but also made them pay for each bead they drew before making this guess. This meant there was always a rationally optimal amount of evidence to gather. Across the board, our participants jumped to conclusions, drawing fewer beads than they optimally should have done. Crucially, this tendency was amplified when participants could observe the evidence-gathering behaviour of others in their chain (the social condition), relative to those who could not (the asocial control). Effectively, participants in the social condition “caught” this tendency from their forebears, and as a result were more likely to arrive at false beliefs about the majority colour, and to earn less money.
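To see why a rationally optimal amount of evidence exists, consider the expected payoff of drawing n beads: the reward multiplied by the probability that the sample majority matches the true urn majority, minus the per-bead cost. The sketch below computes this under illustrative numbers; the study's actual reward, cost and urn ratio may differ.

```python
from math import comb

def p_correct(n, p=0.6):
    """Probability that the sample majority matches the true urn majority
    after n draws with replacement (even-split ties guessed at random)."""
    win = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n // 2 + 1, n + 1))
    tie = comb(n, n // 2) * (p * (1 - p)) ** (n // 2) if n % 2 == 0 else 0.0
    return win + 0.5 * tie

def expected_payoff(n, reward=100, cost=2, p=0.6):
    """Expected earnings from drawing n beads, then guessing the majority.
    Reward and cost are illustrative placeholders, not the study's values."""
    return reward * p_correct(n, p) - cost * n

best_n = max(range(1, 31), key=expected_payoff)
print(best_n, round(expected_payoff(best_n), 2))
```

With these placeholder numbers the optimum is a handful of draws: stopping earlier sacrifices accuracy, while drawing more costs more than the extra accuracy is worth.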

To contextualise this, let’s return to President Trump. Aside from the kinds of beliefs Mr Trump endorses, he has also displayed a certain attitude to evidence. For example, he is famously allergic to reading. According to Trump’s former chief economic adviser, “Trump won’t read anything—not one-page memos, not the brief policy papers, nothing.” Trump himself has acknowledged that he prefers reports to contain “as little as possible.” Now, again, Trump is a prestigious individual and many people follow his lead. But in this case what they might “catch” from him is not a specific belief (e.g., that vaccines cause autism), but a learning strategy – a means by which people acquire beliefs. In particular, they might acquire a disregard for evidence (at least of the written kind).

Consistent with this, our study demonstrates that when individuals see others collecting minimal evidence before making a decision, they too are more inclined to collect minimal evidence. People who form beliefs on the basis of minimal evidence are more likely to be wrong – so, in shifting the focus from the diffusion of false beliefs to the diffusion of suboptimal belief-formation strategies, we have identified a novel mechanism whereby misbeliefs arise and spread.
