The Misinformation Age: how false beliefs spread

Today's post is written by Cailin O'Connor and James Owen Weatherall. In this post, they present their new book The Misinformation Age: How False Beliefs Spread, published by Yale University Press.

Cailin O’Connor is a philosopher of science and applied mathematician specializing in models of social interaction. She is Associate Professor of Logic and Philosophy of Science and a member of the Institute for Mathematical Behavioral Science at the University of California, Irvine. 

James Owen Weatherall is a philosopher of physics and philosopher of science. He is Professor of Logic and Philosophy of Science at the University of California, Irvine, where he is also a member of the Institute for Mathematical Behavioral Science.   


Since early 2016, in the lead-up to the U.S. presidential election and the Brexit vote in the UK, there has been a growing appreciation of the role that misinformation and false beliefs have come to play in major political decisions in Western democracies. (What we have in mind are beliefs such as that vaccines cause autism, that anthropogenic climate change is not real, that the UK pays exorbitant fees to the EU that could be readily redirected to domestic programs, or that genetically modified foods are generally harmful.)

One common line of thought on these events is that reasoning biases are the primary explanation for the spread of misinformation and false belief. To give an example, many have pointed out that confirmation bias – the tendency to take up evidence supporting our current beliefs, and ignore evidence disconfirming them – plays an important role in protecting false beliefs from disconfirmation.

In our recent book, The Misinformation Age: How False Beliefs Spread, we focus on another explanation of the persistence and spread of false belief that we think is as important as individual reasoning biases, or even more so. In particular, we look at the role social connections play in the spread of falsehood. In doing so we draw on work, by ourselves and others, in formal social epistemology. This field typically uses mathematical models of human interaction to study questions such as: how do groups of scientists reach consensus? What role does social structure play in the spread of theories? How can industry influence public beliefs about science?

Throughout the book, we use historical cases and modeling results to study how aspects of social interaction influence belief. First, and most obviously, false beliefs spread as a result of our deep dependence on other humans for information. Almost everything we believe we learn from others, rather than directly from our experience of the world. This social spread of information is tremendously useful to us. (Without it we would not have culture or technology!) However, it also creates a channel for falsehood to multiply. Until recently, we all believed the appendix was a useless evolutionary relic. Without social information, we wouldn’t have had that false belief.

Second, given our dependence on others for information, we have to use heuristics in deciding whom to trust. These heuristics are sometimes good ones – such as trusting those who have given us useful information in the past. Sometimes, though, we ground trust in things like shared identity (are we both in the same fraternity?) or shared belief (do we both believe homeopathy works?). As we show, the latter in particular can lead to persistent polarization, even among agents who seek the truth and who can gather evidence about the world. This is because when actors don't trust those with different beliefs, they ignore the individuals who gather the very evidence that might improve their epistemic state.
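This polarization dynamic can be illustrated with a toy bounded-confidence simulation (a deliberate simplification in the spirit of the formal models discussed in the book, not the authors' own model; the parameter values and function names below are ours): each agent holds a belief in [0, 1] and updates only on the opinions of peers whose beliefs already fall within a trust threshold. When the threshold is low, the population fractures into separate camps that never reconcile.

```python
# Toy sketch: agents who only trust (and so only learn from) those
# with similar beliefs can end up in persistent, polarized clusters.
import random

def step(beliefs, threshold):
    """Each agent moves to the average belief of peers within `threshold` of its own."""
    updated = []
    for mine in beliefs:
        trusted = [b for b in beliefs if abs(b - mine) <= threshold]
        updated.append(sum(trusted) / len(trusted))  # always non-empty: includes self
    return updated

random.seed(0)
beliefs = [random.random() for _ in range(50)]  # initial beliefs spread over [0, 1]

for _ in range(50):
    beliefs = step(beliefs, threshold=0.1)  # narrow trust: listen only to the like-minded

clusters = sorted({round(b, 2) for b in beliefs})
print(clusters)  # beliefs settle into a few distinct camps rather than consensus
```

With a generous threshold the same dynamic drives everyone toward consensus; shrinking whom agents trust is what locks disagreement in place.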

Third, we sometimes care more about social status and social relationships than about holding true beliefs. Consider, for instance, a parent who is friends with many vaccine skeptics. This person may choose not to vaccinate their child, and to espouse the rhetoric of vaccine skepticism in order to conform to the group, irrespective of their underlying beliefs. Of course, this kind of reckless disregard for evidence can be dangerous. And indeed, we predict that this sort of behavior is more likely when there are relatively few consequences to ignoring evidence. In the case of vaccine skepticism, herd immunity often protects the children of skeptics, which allows social factors to come into play.

Last, those who are interested in influencing and shaping our beliefs are often highly sophisticated about the importance of social ties. Industry has used scientific reputation as a weapon to drive climate change skepticism. Russian agents used social media groups of all stripes – gun rights groups, Black Lives Matter groups, LGBTQ groups, anti-immigrant groups – to create social trust, and strengthen their social influence. They then used this influence to drive polarization in the US, to swing the 2016 election for Trump, and to aid the Brexit campaign.

Looking forward, if we want to fight false belief, we need to take social features into account. In particular, we need to appreciate the role that new structures of social interaction, such as the rise of social media, play in our evolving social epistemic landscape. If social factors are essential to explaining our beliefs, as we argue they are, and if the conditions under which we interact with others change, then we should expect that the heuristics we have developed to cope with other social environments will begin to fail.

These failing heuristics are partly responsible for the recent effectiveness of pernicious social media influencers and propagandists. We must anticipate that politically and economically motivated actors will continue to use social ties to spread misinformation, and that they will continue to develop new and more effective methods for doing so. To combat these methods will require time, effort, and money on the part of social media companies and the government.
