Tuesday 25 February 2020

Resistance to Belief Change: Limits of Learning

Today's post is by Joseph Lao (Columbia University and CUNY) and Jason Young (CUNY) who introduce their new book, Resistance to Belief Change (Routledge 2019).

The general perspective of our book may best be described as doxastic psychology. We share with the doxastic philosophers and Jean Piaget’s genetic epistemology an interest in the genesis and transformation of our beliefs. We differ from both, however, in our particular focus on issues of embeddedness and entrenchment, and in our careful examination of a broad range of psychological factors, including emotional, cognitive, social, and physical factors, that cause us to resist changing our beliefs and impede our achievement of epistemic sainthood.

We avoid the assumption that resistance to belief change in response to evidence that contradicts our beliefs is necessarily irrational. We note several examples of how such resistance may be “illogical” yet rational, such as when we lack a superior alternative to our existing beliefs, or when a false belief yields positive consequences. An example of the latter occurs when one spouse convinces the other that he cannot cook, and thereby gains freedom from the responsibility of cooking.

We construe our tendency to resist belief change as a general tendency that is manifested in many specific ways, across a broad array of human domains. We identify many, often unintentional, and even unconscious, “mechanisms” by which we preserve our beliefs intact. For example, we are biased to search for evidence that supports beliefs we wish to be true rather than evidence that contradicts our cherished beliefs. There is ample evidence that it is easier for us to form beliefs than to discard them; discarding a belief literally requires more cognitive effort.

In addition, the phenomenon known as “inattentional blindness” is mediated by our parietal lobe and may literally blind us to evidence that contradicts some of our beliefs. Socially, we associate with other people partly on the basis of shared beliefs. In some cases, the costs of discarding a shared social belief may include being ostracized by an important part of our social support network (such as our church or political party). But we are a social species, for whom such ostracism is at best unpleasant and possibly even fatal. Finally, we note that our beliefs are mediated by still poorly understood neural structures. Once a neural structure constituting a belief has been formed, it becomes more and more firmly embedded as it gets used and reused, thereby offering more and more physical resistance to change.

Joseph Lao
In spite of our natural tendency to resist change, we do of course change. Change is the way we grow, and we definitely do grow. Therefore, in the last two chapters of our book we offer ideas for overcoming resistance, either as self-directed learners or as “teachers.” Drawing on psychological research, philosophy, and the philosophy of science, we suggest that establishing clear, reasonable epistemic standards, committing to an open mind, and maintaining epistemic integrity are effective ways to overcome resistance in ourselves, while having learners collaborate in social problem solving is an effective way to help others overcome resistance.

Tuesday 18 February 2020

Persuasion and Self Persuasion

This post is by Joël van der Weele and Peter Schwardmann.

Joël (picture above) is an associate professor at the Center for Research in Experimental Economics and political Decision making (CREED) at the University of Amsterdam, and a fellow at the Tinbergen Institute and the Amsterdam Brain and Cognition center. His research takes place at the intersection of economics and psychology, using the tools of experimental economics and game theory. Topics include motivated cognition in economic decisions, the interaction of laws and social norms, and the measurement of beliefs.

Peter (picture above) is a behavioural economist at LMU Munich. He works on belief formation and the consequences of belief biases in markets.

As readers of this blog will probably know, belief formation does not always reflect a search for truth. According to an “interactionist view” of cognition, the production of arguments and the persuasion of others lead beliefs to become conveniently aligned with the position one represents. Two theories underpinning this view have received quite some attention, following back-to-back target articles in the journal Behavioral and Brain Sciences in 2011.

In the first, Hugo Mercier and Dan Sperber argue that the way we reason is shaped by our desire to come up with arguments to persuade other people. A by-product of persuasion is that we end up persuading ourselves. In the second, evolutionary biologist Robert Trivers and psychologist Bill von Hippel expand on Trivers’ theory that we have evolved the capacity to self-deceive in order to better deceive others.

While philosophers and social scientists have debated these theories at length, their value will ultimately be determined by empirical tests. Conducting such tests is the aim of two of our recent papers. In the first, published in 2019 in Nature Human Behaviour, we investigated whether von Hippel and Trivers’ self-deception theory can explain the emergence of overconfidence, a ubiquitous cognitive bias. In the experiment, subjects performed an intelligence test. Later on, subjects could earn additional money by convincing independent evaluators of their superior performance.

To investigate self-deception, we compared the beliefs of two groups of subjects. One group was told beforehand about the upcoming opportunity for persuasion, while the other group (control) was not. This difference should not affect how subjects viewed their past performance, unless self-confidence is driven by the wish to persuade others. To measure “true” beliefs, we had subjects bet on their own performance on the intelligence test, thereby putting money at stake for reporting their beliefs accurately.

We found that being informed of the opportunity to make money from persuading others increased subjects’ confidence. Furthermore, we found causal evidence that confidence about performance, which we manipulated during the experiment, helps people be more persuasive through both verbal and non-verbal channels. In the meantime, other papers (see here and here) have shown results going in the same direction.

In a recent preprint, we, together with our co-author Egon Tripodi, investigate Mercier and Sperber’s argument that our beliefs are driven by the need to argue. We do so in a field experiment at international debating competitions, where the persuasion motive is of central importance. During the competition, debating teams are randomly assigned to debate pro and contra positions. This allows us to identify the effect of having to argue for a position, and eliminates self-selection into positions, an issue in many datasets. In a large number of surveys, we elicit debaters’ opinions and attitudes, again putting money at stake to incentivize true reporting.

We find that having to defend a (randomly assigned!) position causes people to “self-persuade” along several dimensions: a) beliefs about factual statements become more conveniently aligned with the debater’s side of the motion, b) attitudes shift as well, reflected in an increased willingness to donate to goal-aligned charities, and c) both sides are overconfident in the strength of their position in the debate. We measure this self-persuasion right before the debate starts, but the subsequent exchange of arguments does not lead to significant convergence in beliefs and attitudes.

While more research is necessary to confirm these results, they show that the desire to argue and persuade is indeed an important driver of overconfidence and opinion formation. More generally, they support a view of cognition that assigns a central role to human interaction and the wish to persuade others.

Tuesday 11 February 2020

Life, Death and Meaning

On 9th September 2019, Yujin Nagasawa organised and hosted a workshop on Life, Death and Meaning – Eastern and Western Perspectives in the Muirhead Tower at the University of Birmingham, in collaboration with researchers at the University of Tokyo and Waseda University.

Muirhead Tower

The first speaker, Norichika Horie (University of Tokyo), presented on Spirituality and Meaning of Life and addressed several themes in our philosophical understanding of meaning. He started from the meaning of meaning. In Chinese and Japanese, “imi” (meaning) is about externalising and verbalising something internal and has important links with intention. “Imi” is an emotion that stays in the mouth and doesn’t turn into words; it is affective and preverbal. But is meaning something to be explored or something to be produced?

Norichika Horie

According to Norichika Horie, the relationship between life and death is crucial to what we think about meaning. The story of life ends with death: it stops changing and becomes meaningful as a whole (a bit like a book that needs to be deciphered). The role of trauma is also important to meaning in life. What is the meaning of evil? In some traditions (Buddhism), suffering can be avoided by detaching from the world and reaching Nirvana, a type of death. In other traditions (Christianity), suffering is a source of meaning and growth. In concentration camps, those who did not lose sight of their future goals did not lose the will to live. This suggests that meaning lies in the sense that we feel responsible for our future, not just our past. Death and trauma are opportunities to renew the meaning of life.

Yujin Nagasawa

The next speaker was Yujin Nagasawa (University of Birmingham), talking about Existential Optimism and Evil. He introduced the problem of evil in philosophy of religion—if God is all-powerful and good, why is there evil in the world? One answer is that evil is allowed so that we can be free to choose: when we choose badly, evil results. So, evil is the price to pay for freedom. Another answer is that there is much we do not know about the world, and it may be that God has good reasons to allow what seems to us to be evil to happen. Other solutions have been proposed too.

The problem of systemic evil is about the existence of pain and suffering in nature. Darwin thought that there seemed to be too much misery in the world. Nature is like a small cage where many animals are placed together and desperately fight for limited resources. Dawkins also described natural selection as a very unpleasant process, saying that he would not want to live in the kind of world where natural selection operates.

In theism, existential optimism is about thanking God for our existence, which is an undeserved personal favour. Atheists like Benatar argue that existing is always painful and it would always be better not to exist. Other atheists are optimists: they mention gratitude for being alive and a sense of wonder. That is why there can be a problem of evil for atheists too: why do we think that the world is good if nature is full of pain and suffering? How can we say that we are happy to be alive if horrible events led to our coming into existence? This version of the problem is more systemic, and cannot be explained by reference to freedom. Also, it concerns not just our existence, but the existence of the world and the existence of other animals.

Can we be happy about our existence while wishing that natural selection did not happen? The world we would like to live in would be too different from the world we live in, because if natural selection did not apply, then laws of nature would not apply either. For atheists the problem is insurmountable because they identify the world with the material universe. But theists can say that there is something beyond nature that is positive and makes the world overall more positive than negative.

Tuesday 4 February 2020

Ignorant Cognition

Today's post is by Selene Arfini, postdoctoral researcher in the Computational Philosophy Laboratory at the Department of Humanities, Philosophy Section at the University of Pavia. She presents her recently published book, Ignorant Cognition: A Philosophical Investigation of the Cognitive Features of Not-Knowing (Springer 2019).

Ignorance, considered without further specifications, is a broad and strange concept. In a way, it is easy to analyze ignorance as something that does not really affect the agent's knowledge: we know A and we are ignorant of B, but the two things are not necessarily related. In another way, we need to face phenomena such as misinformation, a highly recognizable form of ignorance that cannot be treated as leaving the agent's knowledge unaffected, since it has an impact on her/his belief system and understanding.

On the one hand, we instinctively frown upon ignorance if we believe it is purposefully cultivated. On the other hand, we know that we are bound to be ignorant of something: since we cannot know everything, we must rely on others for the expertise we are not able to get in our finite time. Ignorance traditionally bears a negative mark that makes us associate it with fake news, misinformation, and irrational beliefs. At the same time, it can also be associated with an investigative mindset: doubting, researching, and guessing are all ways to acknowledge and take advantage of one's ignorance.

To study ignorance, then, one needs to choose one of two strategies: to focus on a single aspect or definition, or to approach ignorance as a broad concept, acknowledging its clashing traits and its wide theoretical range. The book Ignorant Cognition follows the second strategy and aims at comprehending the fundamental characteristics of ignorance as a cognitively rich term. Indeed, by eschewing one-line definitions (which necessarily exclude some facets of this concept), the book approaches ignorance as a broad notion that refers to a plurality of cognitively rich phenomena. In particular, the book focuses on three ways ignorance impacts the cognition of the human agent.

First, ignorance is taken as a critical element to consider when analyzing the metacognitive capabilities of the subject. So, the first question the book aims to address is: what role does ignorance play when we measure the depth of our doubts, the validity of our knowledge, or the righteousness of our actions?

Second, the book also approaches ignorance as a necessary ground for knowledge. Heuristic reasoning, hypothetical thinking, model-making, and other fruitful cognitive activities need to make use of the agent's ignorance to produce data, learning, and understanding. Following these considerations, it is natural to ask: How do these cognitive activities deal with ignorance? Do they eliminate, manage, or preserve it?

A third approach to understanding the cognitive impact of ignorance is closely related to its negative mark as a common-sense word. Ignorance is usually considered dangerous because it does not only affect isolated individuals, but can also spread and circulate in communities and groups. How does this circulation happen? What are the rules of ignorance-spreading, and how does it affect the social dimensions and possibilities of one's cognition?

Of course, even after addressing these primary questions, the research around the concept of ignorance and its cognitive impact is far from complete. Nevertheless, I believe that by adopting a pragmatic and comprehensive approach to its complexity, the epistemological study of ignorance will more easily develop new materials, hypotheses, and theories to understand and bridge its fascinating clashing traits.