The Argumentative Theory of Reasoning





This post is by Hugo Mercier, Cognitive Scientist (French National Center for Scientific Research) and co-author (with Dan Sperber) of The Enigma of Reason. In this post, he discusses the argumentative theory and refers to some of his most recent publications (1; 2; 3). 

It is easy nowadays to find long lists of biases (such as this one). In turn, these lists have given rise to numerous attempts at debiasing. The popular system 1 / system 2 framework has been useful in framing these attempts. System 1 would be a set of cognitive mechanisms that deliver quick, effortless intuitions, which tend to be correct but are prone to systematic mistakes. System 2 would be able to correct these intuitions through individual reflection. Teaching critical thinking, for instance, can then be thought of as a way of strengthening system 2 against system 1.

The problem is that, as Vasco Correia noted in a recent post, debiasing attempts, including the teaching of critical thinking, have not been quite as successful as we might like. He suggests that instead of trying to change individual cognition, we should manipulate the environment to make the best of the abilities we have.

Essentially, this is the point that Maarten Boudry, Fabio Paglieri, Emmanuel Trouche, and I have made in a recent article. We ground our analysis in the argumentative theory of reasoning. According to this theory, reasoning is not a system-2-like homunculus that would be able to oversee other cognitive mechanisms. Instead, it is just another intuitive mechanism among many others. Its specificity is to bear on reasons: reasoning evaluates and finds reasons. By contrast, the vast majority of our inferences go on without any reasons being processed.

According to the argumentative theory of reasoning, the function of human reasoning is—as the name suggests—to argue. Reasoning would have evolved so that people can exchange arguments. When people disagree, they can then try to convince each other, and evaluate each other's arguments, so that whoever had the best idea to start with is more likely to carry the day.

This theory nicely explains why many biases observed in the lab aren't easily fixed by reasoning: because they are biases of reasoning. In particular, the confirmation bias—or, more aptly, the myside bias—is specific to reasoning. Because of this myside bias, reasoning mostly produces reasons that support one's initial intuitions. Even if initial intuitions are misguided, they are more likely to end up being bolstered by reasoning than corrected. Unsurprisingly, individual reasoning does, by and large, a poor job of correcting mistaken intuitions.

What to do then? Try to use reasoning in the context it evolved to work in: that of a discussion between people who disagree about something while sharing an overall goal—to solve a problem, reach more accurate beliefs, etc. Such discussions improve performance in a wide variety of tasks, from medical diagnoses to economic forecasts, from logical problems to school tasks.

Ironically, doing so might end up improving solitary reasoning as well. When we exchange arguments with others, we are exposed to counter-arguments. With repeated exposure, we learn to anticipate counter-arguments, and this might help attenuate the myside bias.
