In this post, Daniel Williams, Postdoctoral Researcher in the Centre for Philosophical Psychology at the University of Antwerp, responds to last week's post from Ben Tappin and Stephen Gadsby about their recent paper "Biased belief in the Bayesian brain: A deeper look at the evidence".
Ben Tappin and Stephen Gadsby have written an annoyingly good response to my paper, ‘Hierarchical Bayesian Models of Delusion’. Among other things, my paper claimed that there is little reason to think that belief formation in the neurotypical population is Bayesian. Tappin and Gadsby—along with Phil Corlett, and, in fact, just about everyone else I’ve spoken to about this—point out that my arguments for this claim were no good.
Specifically, I argued that phenomena such as confirmation bias, motivated reasoning and the so-called “backfire effect” are difficult to reconcile with Bayesian models of belief formation. Tappin and Gadsby point out that the evidence suggests the backfire effect is extremely rare, that confirmation bias as traditionally understood can be reconciled with Bayesian models, and that almost all purported evidence of motivated reasoning can be captured by Bayesian models under plausible assumptions.
To adjudicate this debate, one has to step back and ask: what kind of evidence *would* put pressure on Bayesian models of belief formation? Unfortunately, this debate is often mired in appeals to logical consistency and inconsistency (i.e. falsification), concepts that are largely irrelevant to science. (In fact, they are profoundly un-Bayesian.) As I mentioned in my paper, with suitable adjustments to model parameters, Bayesian models can be fitted to—that is, made logically consistent with—any data.
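To make that point concrete, here is a minimal sketch in Python (my own toy numbers, not drawn from any study). For a binary hypothesis, Bayes’ rule in odds form says that posterior odds equal prior odds times the likelihood ratio, so for *any* prior and *any* observed posterior one can solve backwards for a likelihood ratio that renders the update perfectly Bayesian:

```python
# Minimal sketch: for a binary hypothesis, Bayes' rule in odds form is
#   posterior_odds = prior_odds * likelihood_ratio.
# Given ANY prior p and ANY observed posterior q (both strictly between
# 0 and 1), we can solve for the likelihood ratio under which the
# observed update counts as exactly Bayesian.

def fitted_likelihood_ratio(prior: float, posterior: float) -> float:
    """Likelihood ratio under which Bayes' rule maps prior to posterior."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability from Bayes' rule in odds form."""
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical example: a subject moves from 0.50 to 0.80 on "my IQ is
# above the group median". The update is "Bayesian" provided we credit
# the subject with a likelihood ratio of 4, whatever the evidence was.
lr = fitted_likelihood_ratio(0.50, 0.80)
print(lr)                      # 4.0
print(bayes_update(0.50, lr))  # 0.8
```

The moral is not that Bayesian models are false, only that bare logical consistency with the data is too cheap a test.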
The question is: which possible evidence should *weaken our confidence* in Bayesian models? Fortunately, Tappin and Gadsby don’t hold the view—surprisingly widespread in this debate—that there is nothing we could discover that should weaken our confidence in them. They concede, for example, that any genuine evidence of motivated reasoning would be significant, since “motivated reasoning constitutes a clear challenge… to the assumption that human belief updating approximates Bayesian inference.”
If that’s right, Tappin and Gadsby face an uphill struggle. Motivated cognition—the influence of our emotions, desires and (as I argue, at least) group identities on belief formation—seems to be pervasive. It is reflected in many phrases of commonsense psychology: “denial,” “wishful thinking,” “burying your head in the sand,” “drinking your own Kool-Aid,” and so on.
Consider a well-known phenomenon: the “good news-bad news effect,” the fact that belief updating is often more sensitive to good news than to bad news. For example, in a famous study experimenters first elicited subjects’ prior beliefs about their relative IQ and physical attractiveness (to members of the other sex), and then exposed them to new information (actual IQ scores and ratings from members of the other sex). The authors of the study describe the results as follows:
“[S]ubjects incorporated favourable news into their existing beliefs in a fundamentally different manner than unfavourable news. In response to favourable news, subjects tended to…adhere quite closely to the Bayesian benchmark, albeit with an optimistic bias. In contrast, subjects discounted or ignored signal strength in processing unfavourable news, which led to noisy posterior beliefs that were nearly uncorrelated with Bayesian inference.”
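To unpack the “Bayesian benchmark” and “signal strength” here, consider a simplified sketch (the numbers are mine, not the study’s): the subject has a prior that some favourable hypothesis about themselves is true and receives a binary signal of known accuracy. The benchmark posterior is just Bayes’ rule applied with that accuracy, and “discounting signal strength” amounts to updating on bad news as though the signal were far less diagnostic than it actually is:

```python
# Simplified sketch of the "Bayesian benchmark" in this kind of design
# (all numbers are illustrative, not the study's). H = "my rank is in
# the top half"; the subject receives a binary signal that is correct
# with probability `accuracy`, the signal's strength, known to the subject.

def benchmark_posterior(prior: float, accuracy: float, good_news: bool) -> float:
    """Posterior for H after one binary signal of known accuracy."""
    if good_news:  # signal says H is true
        num = prior * accuracy
        den = prior * accuracy + (1 - prior) * (1 - accuracy)
    else:          # signal says H is false
        num = prior * (1 - accuracy)
        den = prior * (1 - accuracy) + (1 - prior) * accuracy
    return num / den

prior, accuracy = 0.5, 0.75
print(benchmark_posterior(prior, accuracy, good_news=True))   # 0.75
print(benchmark_posterior(prior, accuracy, good_news=False))  # 0.25

# The asymmetry reported above: subjects track the benchmark for good
# news but respond to bad news as if the signal were barely diagnostic,
# e.g. treating a 0.75-accurate signal as if it were 0.55-accurate.
print(benchmark_posterior(prior, 0.55, good_news=False))      # 0.45
```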
Tappin and Gadsby argue that such studies “preclude the inference that motivation causes the observed patterns of information evaluation.” Why? Because one can construct a Bayesian model to accommodate the data. I’m not sure this is right. The experimenters explicitly elicit prior beliefs, for example, which constrains the free parameters a Bayesian reconstruction could otherwise exploit, and subjects evaluate the same kind of evidence (e.g. IQ test scores) differently depending on its favourability.
Even if Tappin and Gadsby are correct on the issue of logical consistency, however, the question remains: is a Bayesian model plausible? Are people in such circumstances updating beliefs with the help of an optimal statistical inference engine inside their heads, impervious to the influence of their emotions, hopes, desires and identities? I think we should be sceptical. Consider one of the core findings of the experiment, for example: some of the subjects who received bad news were willing to pay to avoid receiving further information. Perhaps there is a Bayesian explanation of burying your head in the sand, but I’m not sure what it would be.
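One reason for scepticism is worth spelling out. In standard Bayesian decision theory, a free signal can never have negative expected value for an expected-utility maximizer (a classic result often credited to I. J. Good), so a Bayesian agent should never pay to avoid information. A toy sketch, with invented payoffs:

```python
# Toy illustration of why paying to avoid information is hard to square
# with Bayesian norms: for an expected-utility maximizer, a free signal
# never has negative expected value (a standard result often credited
# to I. J. Good). All payoffs and probabilities below are invented.

def posterior(prior: float, accuracy: float, says_h: bool) -> float:
    """Posterior P(H) after a binary signal correct with prob `accuracy`."""
    like_h = accuracy if says_h else 1 - accuracy
    like_not_h = (1 - accuracy) if says_h else accuracy
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

def best_eu(p_h: float, utilities: dict) -> float:
    """Expected utility of the best available action given belief p_h in H."""
    return max(p_h * u_h + (1 - p_h) * u_not_h
               for u_h, u_not_h in utilities.values())

# Two actions with state-dependent payoffs: bet on H, or bet against it.
utilities = {"bet_on_h": (1.0, -1.0), "bet_against_h": (-1.0, 1.0)}
prior, accuracy = 0.6, 0.8

eu_decide_now = best_eu(prior, utilities)

# Average the post-signal optima over the two possible signal readings.
p_says_h = prior * accuracy + (1 - prior) * (1 - accuracy)
eu_after_signal = (
    p_says_h * best_eu(posterior(prior, accuracy, True), utilities)
    + (1 - p_says_h) * best_eu(posterior(prior, accuracy, False), utilities)
)

print(eu_decide_now)    # 0.2
print(eu_after_signal)  # 0.6, never less than deciding without the signal
```

One can of course enrich the model so that beliefs themselves carry utility, but arguably that just builds the motivation back in.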