
Persuasion and Self-Persuasion

This post is by Joël van der Weele and Peter Schwardmann.




Joël (picture above) is an associate professor at the Center for Research in Experimental Economics and political Decision making (CREED) at the University of Amsterdam, and a fellow at the Tinbergen Institute and the Amsterdam Brain and Cognition center. His research takes place at the intersection of economics and psychology, using the tools of experimental economics and game theory. Topics include motivated cognition in economic decisions, the interaction of laws and social norms, and the measurement of beliefs.



Peter (picture above) is a behavioural economist at LMU Munich. He works on belief formation and the consequences of belief biases in markets.

As readers of this blog will probably know, belief formation does not always reflect a search for truth. According to an “interactionist view” of cognition, producing arguments and persuading others leads our beliefs to become conveniently aligned with the positions we represent. Two theories underpinning this view have received considerable attention, following back-to-back target articles in the journal Behavioral and Brain Sciences in 2011.

In the first, Hugo Mercier and Dan Sperber argue that the way we reason is shaped by our desire to come up with arguments to persuade other people. A by-product of persuasion is that we end up persuading ourselves. In the second, evolutionary biologist Robert Trivers and psychologist Bill von Hippel expand on Trivers’ theory that we have evolved the capacity to self-deceive in order to better deceive others.

While philosophers and social scientists have debated these theories at length, their value will ultimately be determined by empirical tests. Conducting such tests is the aim of two of our recent papers. In the first, published in 2019 in Nature Human Behaviour, we investigated whether von Hippel and Trivers’ self-deception theory can explain the emergence of overconfidence, a ubiquitous cognitive bias. In the experiment, subjects took an intelligence test. Later on, they could earn additional money by convincing independent evaluators of their superior performance.

To investigate self-deception, we compared the beliefs of two groups of subjects. One group was told beforehand about the upcoming opportunity for persuasion, while the other (control) group was not. This difference should not affect how subjects viewed their past performance, unless self-confidence is driven by the wish to persuade others. To measure “true” beliefs, we let subjects bet on their own test performance, thereby putting money at stake for reporting accurate beliefs.

We found that being informed of the opportunity to make money from persuading others increased subjects’ confidence. Furthermore, we found causal evidence that confidence about performance, which we manipulated during the experiment, helps people be more persuasive through both verbal and non-verbal channels. Since then, other papers (see here and here) have shown results pointing in the same direction.

In a recent preprint, we, together with our co-author Egon Tripodi, investigate Mercier and Sperber’s claim that our beliefs are driven by the need to argue. We do so in a field experiment at international debating competitions, where the persuasion motive is of central importance. During the competition, debating teams are randomly assigned to argue the pro or contra side of a motion. This allows us to identify the causal effect of having to argue for a position and eliminates self-selection into positions, an issue in many datasets. In a series of surveys, we elicit debaters’ opinions and attitudes, again putting money at stake to incentivize truthful reporting.

We find that having to defend a (randomly assigned!) position causes people to “self-persuade” along several dimensions: a) beliefs about factual statements become more conveniently aligned with the debater’s side of the motion, b) attitudes shift as well, as reflected in an increased willingness to donate to goal-aligned charities, and c) both sides are overconfident about the strength of their position in the debate. We measure this self-persuasion right before the debate starts; the subsequent exchange of arguments does not lead to significant convergence in beliefs and attitudes.

While more research is necessary to confirm these results, they show that the desire to argue and persuade is indeed an important driver of overconfidence and opinion formation. More generally, they support a view of cognition that assigns a central role to human interaction and the wish to persuade others.

