Response to Ben Tappin and Stephen Gadsby

In this post, Daniel Williams, Postdoctoral Researcher in the Centre for Philosophical Psychology at the University of Antwerp, responds to last week's post from Ben Tappin and Stephen Gadsby about their recent paper "Biased belief in the Bayesian brain: A deeper look at the evidence". 


Ben Tappin and Stephen Gadsby have written an annoyingly good response to my paper, ‘Hierarchical Bayesian Models of Delusion’. Among other things, my paper claimed that there is little reason to think that belief formation in the neurotypical population is Bayesian. Tappin and Gadsby—along with Phil Corlett, and, in fact, just about everyone else I’ve spoken to about this—point out that my arguments for this claim were no good.

Specifically, I argued that phenomena such as confirmation bias, motivated reasoning and the so-called “backfire effect” are difficult to reconcile with Bayesian models of belief formation. Tappin and Gadsby point out that evidence for the backfire effect suggests that it is extremely rare, that confirmation bias as traditionally understood can be reconciled with Bayesian models, and that almost all purported evidence of motivated reasoning can be captured by Bayesian models under plausible assumptions.

To adjudicate this debate, one has to step back and ask: what kind of evidence *would* put pressure on Bayesian models of belief formation? Unfortunately, this debate often gets mired in appeals to logical consistency and inconsistency (i.e. falsification), concepts that are largely irrelevant to science. (In fact, they are profoundly un-Bayesian.) As I mentioned in my paper, with suitable adjustments to model parameters, Bayesian models can be fitted to—that is, made logically consistent with—any data.
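To see why, note that for a binary hypothesis Bayes’ rule says: posterior odds = prior odds × likelihood ratio. If the likelihoods are left as free parameters, any observed shift in belief can be “rationalized” after the fact. Here is a toy sketch of the point (the numbers are illustrative, not taken from any of the papers under discussion):

```python
# Minimal illustration: any observed update can be made "Bayesian" if the
# likelihoods are free parameters. For a binary hypothesis H:
#   posterior odds = prior odds * likelihood ratio
# So for ANY prior p and ANY observed posterior q (both strictly between
# 0 and 1), some likelihood ratio makes the update exactly Bayesian.

def rationalizing_likelihood_ratio(prior: float, posterior: float) -> float:
    """Return the likelihood ratio P(E|H) / P(E|not-H) that turns
    `prior` into `posterior` under Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

# Example: a seemingly wild swing from 0.9 down to 0.2 is Bayes-consistent
# if we simply posit a likelihood ratio of roughly 0.028.
print(rationalizing_likelihood_ratio(0.9, 0.2))  # ≈ 0.0278
```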

The question is: which possible evidence should *weaken our confidence* in Bayesian models? Fortunately, Tappin and Gadsby don’t hold the view—surprisingly widespread in this debate—that there is nothing we could discover that should weaken our confidence in them. They concede, for example, that genuine “motivated reasoning constitutes a clear challenge… to the assumption that human belief updating approximates Bayesian inference.”

If that’s right, Tappin and Gadsby face an uphill struggle. Motivated cognition—the influence of our emotions, desires, and (I argue, at least) group identities on belief formation—seems to be pervasive. It is reflected in many phrases of commonsense psychology: “denial,” “wishful thinking,” “burying your head in the sand,” “drinking your own Kool-Aid,” and so on.

Consider a well-known phenomenon: the “good news-bad news effect,” the fact that belief updating is often more sensitive to good news than to bad news. For example, in a famous study experimenters first elicited subjects’ prior beliefs about their relative IQ and physical attractiveness (to members of the other sex), and then exposed them to new information (actual IQ scores and ratings from members of the other sex). The authors of the study describe the results as follows:

“[S]ubjects incorporated favourable news into their existing beliefs in a fundamentally different manner than unfavourable news. In response to favourable news, subjects tended to…adhere quite closely to the Bayesian benchmark, albeit with an optimistic bias. In contrast, subjects discounted or ignored signal strength in processing unfavourable news, which led to noisy posterior beliefs that were nearly uncorrelated with Bayesian inference.”
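To make the “Bayesian benchmark” concrete, here is a toy sketch of updating on a binary signal, together with an asymmetric updater that discounts the strength of unfavourable signals. The discount parameter and all numbers are illustrative assumptions on my part, not values from the study:

```python
# Toy model: a subject wonders whether they are in the favourable state
# (e.g. top half of the IQ distribution) and receives a binary signal
# that reports the true state with probability `accuracy`.

def bayes_update(prior: float, signal_good: bool, accuracy: float) -> float:
    """Posterior P(favourable state) after a binary signal, per Bayes' rule."""
    # Likelihood of the observed signal under each state.
    like_if_favourable = accuracy if signal_good else 1 - accuracy
    like_if_unfavourable = (1 - accuracy) if signal_good else accuracy
    numerator = like_if_favourable * prior
    return numerator / (numerator + like_if_unfavourable * (1 - prior))

def asymmetric_update(prior: float, signal_good: bool, accuracy: float,
                      discount: float = 0.5) -> float:
    """Like bayes_update, but treats unfavourable signals as if they were
    less informative: accuracy is shrunk toward chance (0.5) for bad news."""
    if not signal_good:
        accuracy = 0.5 + discount * (accuracy - 0.5)  # discount signal strength
    return bayes_update(prior, signal_good, accuracy)

prior = 0.5
print(bayes_update(prior, True, 0.75))        # 0.75  (good news, full weight)
print(bayes_update(prior, False, 0.75))       # 0.25  (bad news, Bayesian benchmark)
print(asymmetric_update(prior, False, 0.75))  # 0.375 (bad news, discounted)
```

The asymmetric updater ends up closer to its prior after bad news than the Bayesian benchmark permits, which is the qualitative pattern the study reports.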

Tappin and Gadsby argue that such studies “preclude the inference that motivation causes the observed patterns of information evaluation.” Why? Because one can construct a Bayesian model to accommodate the data. I’m not sure that this is right. For example, the experimenters explicitly elicit prior beliefs, and the evaluation of the same kind of evidence (e.g. IQ test scores) depends on whether that evidence is favourable, which leaves little room for differences in priors or likelihoods to explain the asymmetry.

Even if Tappin and Gadsby are correct when it comes to the issue of logical consistency, however, the question is this: is a Bayesian model plausible? Are people in such circumstances updating beliefs with the help of an optimal statistical inference engine inside their heads, impervious to the influence of their emotions, hopes, desires and identities? I think we should be sceptical. Consider one of the core findings of the experiment, for example: some of the subjects who received bad news were willing to pay to avoid receiving further information. Perhaps there is a Bayesian explanation of burying your head in the sand, but I’m not sure what it would be.
