Today's post is written by Cailin O'Connor and James Owen Weatherall, who present their new book The Misinformation Age: How False Beliefs Spread, published by Yale University Press.
Cailin O’Connor is a philosopher of science and applied mathematician specializing in models of social interaction. She is Associate Professor of Logic and Philosophy of Science and a member of the Institute for Mathematical Behavioral Science at the University of California, Irvine.
James Owen Weatherall is a philosopher of physics and philosopher of science. He is Professor of Logic and Philosophy of Science at the University of California, Irvine, where he is also a member of the Institute for Mathematical Behavioral Science.
Since early 2016, in the lead-up to the U.S. presidential election and the Brexit vote in the UK, there has been a growing appreciation of the role that misinformation and false beliefs have come to play in major political decisions in Western democracies. (What we have in mind are beliefs such as that vaccines cause autism, that anthropogenic climate change is not real, that the UK pays exorbitant fees to the EU that could be readily redirected to domestic programs, or that genetically modified foods are generally harmful.)
One common line of thought on these events is that reasoning biases are the primary explanation for the spread of misinformation and false belief. To give an example, many have pointed out that confirmation bias – the tendency to take up evidence supporting our current beliefs and to ignore evidence disconfirming them – plays an important role in protecting false beliefs from disconfirmation.
In our recent book, The Misinformation Age: How False Beliefs Spread, we focus on another explanation of the persistence and spread of false belief that we think is as important as individual reasoning biases, or even more so. In particular, we look at the role social connections play in the spread of falsehood. In doing so, we draw on work, by ourselves and others, in formal social epistemology. This field typically uses mathematical models of human interaction to study questions such as: How do groups of scientists reach consensus? What role does social structure play in the spread of theories? How can industry influence public beliefs about science?
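To give a concrete feel for the kind of model involved, here is a minimal sketch in Python of a network epistemology model in the spirit of this literature (the particular parameters, the all-to-all communication, and the simple two-hypothesis updating are our illustrative choices here, not the exact setup from the book). Agents choose between a well-understood option and a novel one that is, in fact, slightly better; those who believe the novel option is superior test it, and everyone updates on the pooled evidence:

```python
import random

N_AGENTS = 10             # agents on a complete network (everyone sees everyone)
P_GOOD, P_BAD = 0.6, 0.4  # the new option's success rate if it is better / worse
BASELINE = 0.5            # the old option's known success rate
N_TRIALS = 10             # experiments per agent per round
N_ROUNDS = 100

# Each agent's credence that the new option is the better one.
credences = [random.random() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    # Agents who think the new option beats the baseline experiment with it.
    # (In fact it is better: their draws really do come from P_GOOD.)
    evidence = [sum(random.random() < P_GOOD for _ in range(N_TRIALS))
                for c in credences if c > BASELINE]
    # Everyone updates on every batch of evidence gathered this round,
    # comparing the hypotheses "new option is better" vs. "it is worse".
    for k in evidence:
        for i, c in enumerate(credences):
            like_good = P_GOOD**k * (1 - P_GOOD)**(N_TRIALS - k)
            like_bad = P_BAD**k * (1 - P_BAD)**(N_TRIALS - k)
            credences[i] = c * like_good / (c * like_good + (1 - c) * like_bad)

print([round(c, 3) for c in credences])  # typically all near 1.0: true consensus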
Throughout the book, we use historical cases and modeling results to study how aspects of social interaction influence belief. First, and most obviously, false beliefs spread as a result of our deep dependence on other humans for information. Almost everything we believe we learn from others, rather than directly from our experience of the world. This social spread of information is tremendously useful to us. (Without it we would not have culture or technology!) However, it also creates a channel for falsehood to multiply. Until recently, we all believed the appendix was a useless evolutionary relic. Without social information, we wouldn’t have had that false belief.
Second, given our dependence on others for information, we have to use heuristics in deciding whom to trust. These heuristics are sometimes good ones – such as trusting those who have given us useful information in the past. Sometimes, though, we ground trust in things like shared identity (are we both in the same fraternity?) or shared belief (do we both believe homeopathy works?). As we show, the latter in particular can lead to persistent polarization, even among agents who seek truth and who can gather evidence about the world. This is because when actors don’t trust those with different beliefs, they ignore the individuals who gather the very evidence that might improve their epistemic state.
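The sketch below modifies the toy model above to capture this dynamic. It is again a simplification of our own (the technical literature uses smoother, Jeffrey-style discounting of evidence): here, agents flatly ignore evidence gathered by anyone whose credence is too far from their own.

```python
import random

N_AGENTS, N_TRIALS, N_ROUNDS = 20, 10, 500
P_GOOD, P_BAD, BASELINE = 0.6, 0.4, 0.5
MISTRUST = 0.25  # ignore evidence from anyone whose credence differs by more

credences = [random.random() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    # Record who gathered each batch of evidence, so others can decide
    # whether to trust it based on how similar that agent's beliefs are.
    evidence = [(c, sum(random.random() < P_GOOD for _ in range(N_TRIALS)))
                for c in credences if c > BASELINE]
    for i in range(N_AGENTS):
        for source, k in evidence:
            if abs(credences[i] - source) > MISTRUST:
                continue  # discount evidence from those who believe differently
            c = credences[i]
            like_good = P_GOOD**k * (1 - P_GOOD)**(N_TRIALS - k)
            like_bad = P_BAD**k * (1 - P_BAD)**(N_TRIALS - k)
            credences[i] = c * like_good / (c * like_good + (1 - c) * like_bad)

# Typical outcome: one camp near 1.0 and another stranded well below 0.5 –
# stable polarization despite everyone seeing the same stream of evidence.
print(sorted(round(c, 2) for c in credences))
```

The all-or-nothing trust threshold is the crudest possible form of belief-based trust, but it is enough to produce the persistent polarization described above: the skeptical camp discounts exactly the experiments that could have corrected its beliefs.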
Third, we sometimes care more about social status and social relationships than about holding true beliefs. Consider, for instance, a parent who is friends with many vaccine skeptics. This person may choose not to vaccinate their child, and to espouse the rhetoric of vaccine skepticism in order to conform to the group, irrespective of their underlying beliefs. Of course, this kind of reckless disregard for evidence can be dangerous. And indeed, we predict that this sort of behavior is more likely when there are relatively few consequences to ignoring evidence. In the case of vaccine skepticism, herd immunity often protects the children of skeptics, which allows social factors to come into play.
Last, those who are interested in influencing and shaping our beliefs are often highly sophisticated about the importance of social ties. Industry has used scientific reputation as a weapon to drive climate change skepticism. Russian agents used social media groups of all stripes – gun rights groups, Black Lives Matter groups, LGBTQ groups, anti-immigrant groups – to build social trust and strengthen their social influence. They then used this influence to drive polarization in the US, to swing the 2016 election for Trump, and to aid the Brexit campaign.
Looking forward, if we want to fight false belief, we need to take social features into account. In particular, we need to appreciate the role that new structures of social interaction, such as the rise of social media, play in our evolving social epistemic landscape. If social factors are essential to explaining our beliefs, as we argue they are, and if the conditions under which we interact with others change, then we should expect that the heuristics we developed to cope with earlier social environments will begin to fail.
These failing heuristics are partly responsible for the recent effectiveness of pernicious social media influencers and propagandists. We must anticipate that politically and economically motivated actors will continue to use social ties to spread misinformation, and that they will continue to develop new and more effective methods for doing so. Combating these methods will require time, effort, and money on the part of social media companies and the government.