
Biased Belief in the Bayesian Brain

Today’s post comes from Ben Tappin, PhD candidate in the Morality and Beliefs Lab at Royal Holloway, University of London, and Stephen Gadsby, PhD candidate in the Philosophy and Cognition Lab, Monash University, who discuss their paper recently published in Consciousness and Cognition, “Biased belief in the Bayesian brain: A deeper look at the evidence”.

Last year Dan Williams published a critique of recently popular hierarchical Bayesian models of delusion, which generated much debate on the pages of Imperfect Cognitions. In a recent article, we examined a particular aspect of Williams’ critique: specifically, his argument that one cannot explain delusional beliefs as departures from approximate Bayesian inference, because belief formation in the neurotypical (healthy) mind is not itself Bayesian.

We are sympathetic to this critique. However, in our article we argue that canonical evidence of the phenomena discussed by Williams—in particular, evidence of the backfire effect, confirmation bias and motivated reasoning—does not convincingly demonstrate that neurotypical belief formation is not Bayesian.

The backfire effect describes the phenomenon whereby people become more confident in a belief after receiving information that contradicts that belief. As Williams points out, this phenomenon is problematic for Bayesian models of belief formation insofar as new information should move a Bayesian’s belief towards that information, never away from it. (As an aside, this expectation is itself incorrect, e.g., see here or here).
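
To make that expectation concrete, here is a minimal sketch (our own toy illustration with hypothetical numbers, not a model from the paper) of a single Bayesian update on a binary hypothesis: however strong the prior, evidence that favours the alternative can only pull the posterior towards that evidence.

```python
def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E (Bayes' rule)."""
    numerator = prior_h * p_e_given_h
    denominator = numerator + (1 - prior_h) * p_e_given_not_h
    return numerator / denominator

# Hypothetical numbers: a strong prior belief in H (0.9) meets evidence that is
# four times more likely if H is false than if H is true.
posterior = bayes_update(prior_h=0.9, p_e_given_h=0.2, p_e_given_not_h=0.8)
print(round(posterior, 3))  # 0.692 -- confidence falls towards the contradicting evidence
```

On this simple picture a backfire, with the posterior rising above the prior, cannot happen; it is richer models, for instance those in which the reasoner also infers the reliability of the source, that complicate the expectation.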

We reviewed numerous recent studies where conditions for backfire were favourable (according to its theoretical basis), and found that observations of backfire were the rare exception, not the rule. Indeed, the results of these studies showed that, by and large, people updated their beliefs towards the new information, even if it was contrary to their prior beliefs and in a highly emotive domain.

For example, in an investigation comprising more than 10,000 subjects, researchers Wood and Porter (2018) tested for backfire on numerous “hot button” political issues in the United States—such as gun violence, immigration, crime, abortion and race (there were 52 issues in total)—and they found scant evidence of the phenomenon. Many other recent studies have reported similar results. This is not to say that backfire never occurs (we think it does), but rather that current evidence of the phenomenon does not show it to be a standard feature of belief formation. Therefore, this evidence does not convincingly demonstrate that belief formation in the neurotypical mind is not Bayesian.

The scientific literature on confirmation bias and motivated reasoning is large and diverse; unfortunately, too large and too diverse to review exhaustively in our article. We therefore focused on classic evidence of these phenomena.

A classic demonstration of confirmation bias is that people are prone to judge information as more reliable if it confirms, rather than contradicts, their prior beliefs. Does this convincingly refute Bayesian principles? We are skeptical. Formal models show that this type of confirmation bias can be expected from Bayesians (e.g., see here or here). Such models rely on assumptions, of course, and these assumptions can be, and should be, scrutinized (cf. Bayesian “just-so” stories). However, in the absence of reasons to reject such model assumptions, one is hard-pressed to conclude that confirmation bias (of the type above) convincingly demonstrates that neurotypical belief formation is not Bayesian.
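
As a rough illustration of how such a pattern can fall out of Bayesian machinery (a toy model of our own with hypothetical numbers, not one of the cited formal models), suppose a reasoner is unsure whether a source is reliable (reports the truth) or unreliable (reports at random), and infers the source’s reliability from the report itself. A report that matches a confident prior then raises the source’s judged reliability, while an otherwise identical report that contradicts the prior lowers it.

```python
def judged_reliability(prior_h, prior_reliable, report_says_h):
    """Posterior probability that the source is reliable, given what it reported.

    Toy assumptions: a reliable source reports the truth about H; an unreliable
    source says "H" or "not H" with probability 0.5 each, regardless of the truth.
    """
    p_report_if_reliable = prior_h if report_says_h else (1 - prior_h)
    p_report_if_unreliable = 0.5
    num = prior_reliable * p_report_if_reliable
    return num / (num + (1 - prior_reliable) * p_report_if_unreliable)

# Hypothetical numbers: strong prior belief in H (0.8), 50/50 prior on reliability.
print(round(judged_reliability(0.8, 0.5, report_says_h=True), 2))   # 0.62 -- confirming source seems more reliable
print(round(judged_reliability(0.8, 0.5, report_says_h=False), 2))  # 0.29 -- contradicting source seems less reliable
```

The reasoner here applies Bayes’ rule throughout, yet behaves as if biased towards belief-consistent information.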

Paradigmatic evidence of motivated reasoning faces a related limitation. Popular study designs randomly assign people with diverse preferences or identities to receive new information, and then ask these people to evaluate the information. A common result is that people rate information consistent with their preferences and identities as more reliable than inconsistent information that is otherwise identical. Because people’s preferences and identities co-vary with a wide range of third variables—not least, their prior beliefs and lived experiences—the results of these studies are polluted by the confirmation bias described in the preceding paragraph (and, therefore, they too are not a convincing refutation of Bayesian principles).

Study designs that purport to rule out this confounding influence of prior beliefs provide seemingly mixed evidence of motivated reasoning, and/or have been interpreted as support for a model of motivated reasoning whose key assumption is that people condition their evaluation of new information on its perceived uncertainty. This assumption seems consistent with core Bayesian principles.
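
To see why that assumption sits comfortably with Bayesian principles, consider the standard conjugate Gaussian update (a generic textbook sketch with hypothetical numbers, not an analysis from the studies discussed): the noisier the new information is perceived to be, the less it shifts the belief.

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance for a Gaussian prior combined with a Gaussian observation.

    The observation is weighted by its precision (1 / variance), so information
    perceived as noisier moves the belief less.
    """
    prior_prec, obs_prec = 1 / prior_var, 1 / obs_var
    post_var = 1 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Hypothetical numbers: the same new information (obs = 10), perceived as precise vs. noisy.
print(round(gaussian_update(0.0, 1.0, obs=10.0, obs_var=1.0)[0], 2))   # 5.0  -- precise information shifts the belief a lot
print(round(gaussian_update(0.0, 1.0, obs=10.0, obs_var=10.0)[0], 2))  # 0.91 -- noisy information barely shifts it
```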

We are open to the idea that neurotypical belief formation is not Bayesian. Indeed, we agree with Williams and others that there are compelling reasons to think that it is not so. We just believe that classic evidence of the backfire effect, confirmation bias and motivated reasoning is not one of these reasons.
