Today’s post comes from Ben Tappin, PhD candidate in the Morality and Beliefs Lab at Royal Holloway, University of London, and Stephen Gadsby, PhD candidate in the Philosophy and Cognition Lab, Monash University, who discuss their paper recently published in Consciousness and Cognition, “Biased belief in the Bayesian brain: A deeper look at the evidence”.
Last year Dan Williams published a critique of recently popular hierarchical Bayesian models of delusion, which generated much debate on the pages of Imperfect Cognitions. In a recent article, we examined a particular aspect of Williams’ critique: his argument that one cannot explain delusional beliefs as departures from approximate Bayesian inference, because belief formation in the neurotypical (healthy) mind is not Bayesian.
We are sympathetic to this critique. However, in our article we argue that canonical evidence of the phenomena discussed by Williams—in particular, evidence of the backfire effect, confirmation bias and motivated reasoning—does not convincingly demonstrate that neurotypical belief formation is not Bayesian.
The backfire effect describes the phenomenon whereby people become more confident in a belief after receiving information that contradicts that belief. As Williams points out, this phenomenon is problematic for Bayesian models of belief formation insofar as new information should move a Bayesian’s beliefs towards that information, never away from it. (As an aside, this expectation is incorrect, e.g., see here or here.)
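To make the textbook expectation concrete, here is a minimal sketch of our own (in Python, with purely illustrative numbers not drawn from any study) of Bayesian updating on a binary hypothesis. In this simple setting, evidence that contradicts the hypothesis always pulls confidence towards the evidence; as the aside above notes, richer Bayesian models can licence movement away from it, so the sketch illustrates the textbook expectation rather than a general law.

```python
# A minimal sketch (ours, with illustrative numbers) of textbook Bayesian
# updating on a binary hypothesis H. In this simple setting, evidence that
# contradicts H always lowers confidence in H, i.e. belief moves towards
# the evidence.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) via Bayes' rule."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1 - prior)
    return numerator / marginal

prior = 0.9  # strong prior confidence in H
# E contradicts H: it is more likely to occur if H is false.
posterior = update(prior, p_e_given_h=0.2, p_e_given_not_h=0.8)
print(round(posterior, 3))  # 0.692 -- confidence drops towards the evidence
```

In this one-parameter setting, contradicting evidence (a likelihood ratio below one) can never raise the posterior; backfire therefore requires the richer models gestured at in the aside.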
We reviewed numerous recent studies in which conditions for backfire were favourable (according to its theoretical basis), and found that observations of backfire were the rare exception, not the rule. Indeed, the results of these studies showed that by and large people updated their beliefs towards the new information, even when it was contrary to their prior beliefs and in a highly emotive domain.
For example, in an investigation comprising more than 10,000 subjects, researchers Wood and Porter (2018) tested for backfire on numerous “hot button” political issues in the United States—such as gun violence, immigration, crime, abortion and race (there were 52 issues in total)—and they found scant evidence of the phenomenon. Many other recent studies have reported similar results. This is not to say that backfire never occurs (we think it does), but rather that current evidence of the phenomenon does not show it to be a standard feature of belief formation. Therefore, this evidence does not convincingly demonstrate that belief formation in the neurotypical mind is not Bayesian.
The scientific literature on confirmation bias and motivated reasoning is large and diverse; unfortunately, too large and too diverse to review exhaustively in our article. We therefore focused on classic evidence of these phenomena.
A classic demonstration of confirmation bias is that people are prone to judge information as more reliable if it confirms vs. contradicts their prior beliefs. Does this convincingly refute Bayesian principles? We are skeptical. Formal models show that this type of confirmation bias can be expected from Bayesians (e.g., see here or here). Such models rely on assumptions, of course, and these assumptions can be—and should be—scrutinized (cf. Bayesian “just-so” stories). However, in the absence of reasons to reject such model assumptions, one is hard-pressed to conclude that confirmation bias (of the type above) convincingly demonstrates that neurotypical belief formation is not Bayesian.
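To illustrate the kind of result such formal models deliver, here is a toy sketch of our own; the two-source setup and all numbers are assumptions for illustration, not the models cited above. A Bayesian who is jointly uncertain about a claim and about the reliability of a source reporting on it will, after updating, judge a source that confirms her prior as more reliable than one that contradicts it.

```python
# A toy sketch (our illustration; the setup and numbers are assumptions) of
# confirmation bias falling out of Bayesian inference when the reliability
# of an information source is itself uncertain.

def p_reliable_given_report(p_h, p_reliable, report_says_h):
    """Posterior probability that the source is reliable, given its report.

    A reliable source reports the truth about H with probability 0.9;
    an unreliable source says 'H' or 'not H' at random (0.5 each).
    """
    p_says_h_if_reliable = p_h * 0.9 + (1 - p_h) * 0.1
    p_report_if_reliable = (p_says_h_if_reliable if report_says_h
                            else 1 - p_says_h_if_reliable)
    p_report_if_unreliable = 0.5
    num = p_reliable * p_report_if_reliable
    return num / (num + (1 - p_reliable) * p_report_if_unreliable)

# Prior belief in the claim H is 0.8; prior that the source is reliable is 0.7.
print(round(p_reliable_given_report(0.8, 0.7, report_says_h=True), 3))   # 0.775
print(round(p_reliable_given_report(0.8, 0.7, report_says_h=False), 3))  # 0.548
```

The confirming source ends up judged more reliable (0.775 vs. 0.548) even though the agent is a textbook Bayesian; whether real reasoners satisfy the model’s assumptions is, as we note, a separate question.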
Paradigmatic evidence of motivated reasoning faces a related limitation. Popular study designs randomly assign people with diverse preferences or identities to receive new information, and then ask these people to evaluate that information. A common result is that people rate information consistent with their preferences and identities as more reliable than inconsistent information that is otherwise identical. Because people’s preferences and identities co-vary with a wide range of third variables (not least, their prior beliefs and lived experiences), the results of these studies are polluted by the confirmation bias described in the preceding paragraph, and therefore they too are not a convincing refutation of Bayesian principles.
Study designs that purport to rule out this confounding influence of prior beliefs provide seemingly mixed evidence of motivated reasoning, or have been interpreted as supporting a model of motivated reasoning whose key assumption is that people condition their evaluation of new information on its perceived uncertainty. This assumption seems consistent with core Bayesian principles.
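To see why conditioning on perceived uncertainty sits comfortably with Bayesian principles, consider one more minimal sketch of our own (illustrative numbers only): the noisier a signal is, the closer its likelihood ratio sits to one, and the less it should move a rational agent’s belief.

```python
# A minimal sketch (ours, illustrative numbers) of uncertainty-weighted
# updating: less diagnostic (noisier) evidence warrants a smaller belief
# change for a Bayesian.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

prior = 0.5
# Precise evidence: likelihood ratio 0.9 / 0.1 = 9.
print(round(posterior(prior, 0.9, 0.1), 2))  # 0.9
# Uncertain, noisy evidence: likelihood ratio 0.6 / 0.4 = 1.5.
print(round(posterior(prior, 0.6, 0.4), 2))  # 0.6
```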
We are open to the idea that neurotypical belief formation is not Bayesian. Indeed, we agree with Williams and others that there are compelling reasons to think that it is not so. We just believe that classic evidence of the backfire effect, confirmation bias and motivated reasoning is not one of these reasons.