Tuesday 26 February 2019

Response to Ben Tappin and Stephen Gadsby

In this post, Daniel Williams, Postdoctoral Researcher in the Centre for Philosophical Psychology at the University of Antwerp, responds to last week's post from Ben Tappin and Stephen Gadsby about their recent paper "Biased belief in the Bayesian brain: A deeper look at the evidence". 


Ben Tappin and Stephen Gadsby have written an annoyingly good response to my paper, ‘Hierarchical Bayesian Models of Delusion’. Among other things, my paper claimed that there is little reason to think that belief formation in the neurotypical population is Bayesian. Tappin and Gadsby—along with Phil Corlett, and, in fact, just about everyone else I’ve spoken to about this—point out that my arguments for this claim were no good.

Specifically, I argued that phenomena such as confirmation bias, motivated reasoning and the so-called “backfire effect” are difficult to reconcile with Bayesian models of belief formation. Tappin and Gadsby point out that the evidence suggests the backfire effect is extremely rare, that confirmation bias as traditionally understood can be reconciled with Bayesian models, and that almost all purported evidence of motivated reasoning can be captured by Bayesian models under plausible assumptions.

To adjudicate this debate, one has to step back and ask: what kind of evidence *would* put pressure on Bayesian models of belief formation? Unfortunately, this debate often gets mired in appeals to concepts like logical consistency and inconsistency (i.e. falsification), which are largely irrelevant to science. (In fact, they are profoundly un-Bayesian.) As I mentioned in my paper, with suitable adjustments to model parameters, Bayesian models can be fitted to—that is, made logically consistent with—any data. 
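To see why, note that any single shift from a prior to a posterior can be made Bayes-consistent after the fact, simply by positing whatever likelihood ratio makes the numbers work out. A minimal sketch, with hypothetical numbers:

```python
def implied_likelihood_ratio(prior, posterior):
    """The likelihood ratio that renders a given prior -> posterior shift
    exactly Bayes-consistent (posterior odds = likelihood ratio x prior odds)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# Hypothetical numbers: a belief that jumps from 0.7 to 0.9 counts as Bayesian
# if we grant that the subject treated the evidence as carrying a likelihood
# ratio of ~3.9; a belief that barely budges (0.7 to 0.72) counts as Bayesian
# too, with a likelihood ratio of ~1.1.
print(implied_likelihood_ratio(0.7, 0.9))   # ~3.86
print(implied_likelihood_ratio(0.7, 0.72))  # ~1.10
```

Unless the likelihoods are pinned down independently of the updates they are meant to explain, a fit of this kind is guaranteed, which is why mere consistency settles nothing.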

The question is: which possible evidence should *weaken our confidence* in Bayesian models? Fortunately, Tappin and Gadsby don’t hold the view—surprisingly widespread in this debate—that there is nothing we could discover which should weaken our confidence in them. They concede, for example, that any genuine evidence for “motivated reasoning constitutes a clear challenge… to the assumption that human belief updating approximates Bayesian inference.”

If that’s right, Tappin and Gadsby face an uphill struggle. Motivated cognition—the influence of our emotions, desires, and (I argue, at least) group identities on belief formation—seems to be pervasive. It is reflected in many phrases of commonsense psychology: “denial,” “wishful thinking,” “burying your head in the sand,” “drinking your own Kool-Aid,” and so on. 

Consider a well-known phenomenon: the “good news-bad news effect,” the fact that belief updating is often more sensitive to the reception of good news than to bad news. For example, in a famous study, experimenters first elicited subjects’ prior beliefs about their relative IQ and physical attractiveness (to members of the other sex), and then exposed them to new information (actual IQ scores and ratings from members of the other sex). The authors of the study describe the results as follows:

“[S]ubjects incorporated favourable news into their existing beliefs in a fundamentally different manner than unfavourable news. In response to favourable news, subjects tended to…adhere quite closely to the Bayesian benchmark, albeit with an optimistic bias. In contrast, subjects discounted or ignored signal strength in processing unfavourable news, which led to noisy posterior beliefs that were nearly uncorrelated with Bayesian inference.”
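To make the benchmark concrete, here is a minimal sketch (with hypothetical numbers, not the study’s actual design) of a Bayesian update for a binary hypothesis such as “my IQ is above the median,” alongside an updater that discounts the informativeness of unfavourable signals:

```python
def bayes_update(prior, signal_is_good, accuracy):
    """Posterior P(hypothesis) after a binary signal of known accuracy."""
    p_signal_if_true = accuracy if signal_is_good else 1 - accuracy
    p_signal_if_false = 1 - accuracy if signal_is_good else accuracy
    numerator = p_signal_if_true * prior
    return numerator / (numerator + p_signal_if_false * (1 - prior))

def asymmetric_update(prior, signal_is_good, accuracy, discount=0.5):
    """Same update, except the informativeness of bad news is shrunk towards
    chance, mimicking the reported pattern (the discount is hypothetical)."""
    if not signal_is_good:
        accuracy = 0.5 + discount * (accuracy - 0.5)
    return bayes_update(prior, signal_is_good, accuracy)

prior = 0.5
print(bayes_update(prior, True, 0.8), bayes_update(prior, False, 0.8))            # 0.8 0.2
print(asymmetric_update(prior, True, 0.8), asymmetric_update(prior, False, 0.8))  # 0.8 0.35
```

The Bayesian benchmark treats good and bad signals of equal strength symmetrically; the pattern the authors report is the asymmetric one.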

Tappin and Gadsby argue that such studies “preclude the inference that motivation causes the observed patterns of information evaluation.” Why? Because one can construct a Bayesian model to accommodate that data. I’m not sure that this is right. The experimenters, for example, explicitly elicit subjects’ prior beliefs, and the same kind of evidence (e.g. IQ test scores) is evaluated differently depending on its favourability, which makes it hard to attribute the asymmetry to hidden differences in priors or in the quality of the evidence itself. 

Even if Tappin and Gadsby are correct when it comes to the issue of logical consistency, however, the question is this: is a Bayesian model plausible? Are people in such circumstances updating beliefs with the help of an optimal statistical inference engine inside their heads, impervious to the influence of their emotions, hopes, desires and identities? I think we should be sceptical. Consider one of the core findings of the experiment, for example: some of the subjects who received bad news were willing to pay to avoid receiving further information. Perhaps there is a Bayesian explanation of burying your head in the sand, but I’m not sure what it would be.
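One way to sharpen the puzzle: in standard Bayesian decision theory, free information has non-negative expected value, so an agent who updates by Bayes’ rule and maximises expected utility should never pay to avoid it. A toy illustration, with hypothetical payoffs:

```python
# Two states, two actions, and a binary signal with 80% accuracy. All numbers
# are hypothetical; the point is only that observing a free signal before
# acting can never lower a Bayesian expected-utility maximiser's prospects.
states = ["high", "low"]
prior = {"high": 0.5, "low": 0.5}
utility = {("act", "high"): 10, ("act", "low"): -8,
           ("pass", "high"): 0, ("pass", "low"): 0}
p_signal = {("good", "high"): 0.8, ("bad", "high"): 0.2,
            ("good", "low"): 0.2, ("bad", "low"): 0.8}

def best_expected_utility(belief):
    """Expected utility of the best action under a given belief."""
    return max(sum(belief[s] * utility[(a, s)] for s in states)
               for a in ("act", "pass"))

# Acting on the prior alone.
eu_ignore = best_expected_utility(prior)

# Observing the free signal first, then acting on the posterior.
eu_observe = 0.0
for sig in ("good", "bad"):
    p_sig = sum(p_signal[(sig, s)] * prior[s] for s in states)
    posterior = {s: p_signal[(sig, s)] * prior[s] / p_sig for s in states}
    eu_observe += p_sig * best_expected_utility(posterior)

print(eu_ignore, eu_observe)  # 1.0 3.2 -- looking is (weakly) better
```

So, within the standard framework, paying to avoid information is hard to rationalise; something beyond vanilla Bayesian updating plus expected-utility maximisation seems to be needed.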

1 comment:

  1. One line of research suggests that our capacity for Bayesian beliefs is subject to evolutionary constraints (Haselton & Nettle, 2006). Error management theory holds that systematic departures from Bayesian beliefs can be adaptive when judgments are made under uncertainty and the costs of false positives and false negatives are asymmetric. For example, Raihani and Bell (2018) argue that paranoia (the attribution of harmful intent to others) evolved as an adaptive mechanism for detecting and avoiding coalitionary threat when the probability and costs of harm from others are high, which is why it is more common in some environments than in others. Another example is the classic sexual overperception bias in men, which is suggested to have evolved because missed opportunities are costlier than rejections (Haselton, 2003). Moreover, if belief formation in the neurotypical mind were simply non-Bayesian, one would expect this phenomenon to be documented in both sexes.
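    The asymmetric-cost logic can be made concrete with a simple decision threshold: when missing a real threat is much costlier than a false alarm, even a modest posterior probability of harm is enough to make avoidance the expected-cost-minimising response. A minimal sketch, with hypothetical costs:

    ```python
    def act_threshold(cost_false_negative, cost_false_positive):
        """Posterior probability of threat above which acting (e.g. avoiding the
        other party) minimises expected cost: act when p > C_fp / (C_fp + C_fn)."""
        return cost_false_positive / (cost_false_positive + cost_false_negative)

    # Hypothetical costs: missing a real coalitionary threat is 10x worse than
    # a false alarm, so avoidance is triggered at a posterior of only ~0.09.
    print(act_threshold(cost_false_negative=10, cost_false_positive=1))  # ~0.09
    ```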

    Perhaps phenomena such as confirmation bias, motivated cognition, the “backfire effect”, paranoia and sexual overperception (among many others) are expressions of the same underlying logic. Their occurrence does not mean that belief formation in the neurotypical mind is not Bayesian, but rather that, from an evolutionary psychological perspective, selection favoured such departures from Bayesian beliefs because under certain circumstances they offer greater benefits.

    Another line of research suggests that, given the complexity and volume of relevant information and the time constraints people face, judgments cannot be based on exhaustive strategies or unlimited sampling (Caplin, Dean, & Martin, 2011; Sanborn & Chater, 2016). The accuracy of Bayesian inference is also limited by our understanding of evidence quality, i.e., how well the evidence indicates the truth (or falsity) of our hypothesis (Hahn, Merdes, & von Sydow, 2018). Many decisions are therefore made without fully examining all available options: most people stop searching for information once an environmentally determined, satisficing level of reservation utility is met. Humans are thus generally satisficers making reasonable inferences rather than optimizers making the best inferences possible.
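    To illustrate the sampling point, a posterior approximated from only a handful of samples is noisy and often extreme, so behaviour that looks systematically non-Bayesian can arise from a sampling approximation to Bayesian inference. A minimal sketch, with hypothetical numbers:

    ```python
    import random

    def exact_posterior(prior, likelihood_ratio):
        """Exact Bayesian posterior for a binary hypothesis."""
        odds = (prior / (1 - prior)) * likelihood_ratio
        return odds / (1 + odds)

    def few_sample_posterior(prior, likelihood_ratio, n_samples=3):
        """Self-normalised importance sampling with very few samples: draw
        hypotheses from the prior, weight them by the likelihood, and report
        the weighted share that are true."""
        draws = [random.random() < prior for _ in range(n_samples)]
        weights = [likelihood_ratio if h else 1.0 for h in draws]
        return sum(w for h, w in zip(draws, weights) if h) / sum(weights)

    random.seed(1)
    print(exact_posterior(0.5, 4.0))  # 0.8
    print([round(few_sample_posterior(0.5, 4.0), 2) for _ in range(5)])  # noisy, often 0.0 or 1.0
    ```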

    With regard to motivated cognition, source credibility seems to be important in people’s evaluation of new information. The perceived reliability of a source, and its false/true positive and negative rates, influence people’s evaluation of the information it communicates and therefore their posterior beliefs. This is reflected in the political domain, where laypeople across the political spectrum consider mainstream sources more trustworthy than either hyper-partisan or fake news sources (Pennycook & Rand, 2019). They are thus able to distinguish between lower- and higher-quality news sources and to sample information from more accurate sources, leading to more accurate posterior beliefs, which is consistent with Bayesian principles.
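    In Bayesian terms, source credibility enters through the likelihood: the same report shifts beliefs a great deal when it comes from a source with a high true-positive and low false-positive rate, and only slightly when it comes from an unreliable one. A minimal sketch, with hypothetical rates:

    ```python
    def posterior_after_report(prior, true_positive_rate, false_positive_rate):
        """Posterior P(claim is true) after a source asserts the claim, given
        the source's true-positive and false-positive rates."""
        p_report = true_positive_rate * prior + false_positive_rate * (1 - prior)
        return true_positive_rate * prior / p_report

    # Hypothetical rates: a reliable mainstream outlet vs. a hyper-partisan one.
    print(posterior_after_report(0.5, true_positive_rate=0.9, false_positive_rate=0.1))  # 0.9
    print(posterior_after_report(0.5, true_positive_rate=0.6, false_positive_rate=0.5))  # ~0.55
    ```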

    Other recent findings demonstrate that, contrary to popular accounts, cognitive sophistication is not deployed to protect political identity when updating beliefs about politically concordant or discordant information (Tappin, Pennycook, & Rand, 2018). On the contrary, more analytical individuals deviated less from Bayesian posterior beliefs. Furthermore, intellectual humility is essential for avoiding confirmation biases when reasoning about evidence and evaluating beliefs, and research indicates that cognitive flexibility and intelligence predict intellectual humility (Zmigrod et al., 2019). So perhaps belief updating that approximates Bayesian inference also requires a certain level of intellectual capability in order to overcome the influence of affect, group identity and other factors, including the temptation to bury your head in the sand.

