
Knowledge Resistance: An Interview with Åsa Wikforss

In today's post I interview Åsa Wikforss about her Knowledge Resistance program. Åsa is a professor of theoretical philosophy at Stockholm University, whose research sits at the intersection of philosophy of mind, language and epistemology.

Åsa Wikforss

Kathleen Murphy-Hollies: Hi Åsa. First of all, could you talk a little bit about what the knowledge resistance project is about and what kind of key questions it addresses?

Åsa Wikforss: So it's a large cross-disciplinary program with about 30 researchers involved. The full name is ‘Knowledge Resistance: Causes, Consequences and Cures’, and we investigate knowledge resistance from four different disciplinary angles. Philosophically, we do the foundational work of spelling out what we are even talking about when we’re talking about knowledge resistance. At a first approximation, we say it's a kind of irrational resistance to evidence, but there's a lot to unpack there. What is the evidence? What kind of irrationality? What kind of resistance? In terms of psychology, we look experimentally at the types of psychological mechanisms involved in resisting the evidence. And how you think of knowledge resistance affects how you design experiments, so we have a close collaboration with the psychologists.

Importantly, knowledge resistance involves psychological mechanisms in interaction with the external environment. What has changed recently isn’t so much the psychology but the environment. In particular, the information environment and the political environment.

For this reason, we also have media and communication scholars who research the new media situation and what that means for how we respond to evidence. They look at how disinformation is spread and the complex role of trust, among other things. And there is a political science team that investigates things like partisanship and polarisation and how those shape belief formation. From the beginning we thought that we needed to have all these disciplines involved, and I think that's proven to be exactly right.

KMH: That all sounds fascinating and very relevant nowadays – so what was it that first got you interested in these issues?

ÅW: Well it was kind of backwards, in that it started with me writing a popular book on these topics. After the political developments in 2016 we saw a lack of knowledge, ‘fake news’ and ‘alternative facts’ all have huge political consequences, and I thought this is such philosophical nonsense and I can’t just sit and listen to it! So I wrote a book called ‘Alternative Facts: On Knowledge and Its Enemies’ where I explore, from the point of view of philosophy and psychology, what's going on!

I also felt very strongly that theories within some parts of the humanities and social sciences which questioned the very idea that there is such a thing as truth or facts added to the unclarity of the situation. The book came out in 2017 in the middle of quickly increasing interest in these issues and I was lecturing everywhere about it. The research on these topics started to really grow around then and I thought, now's the time for a research program which brings all this together! Because the research had been here and there across different disciplines, and no one had brought it together and provided a coherent framework for the study of knowledge resistance.

KMH: As you say, the project so far seems to have only become more and more relevant given political developments in the world. What findings have surprised you the most so far?

ÅW: It’s hard to pick something! Just this morning I read a study by the media group about how conspiracy theories are spread on social media and which kinds of platforms are worst for it. You might think that it's all the same, but actually the design of the various social media platforms matters. Twitter does much better with conspiracy theories but worse when it comes to hate and threats. Facebook, on the other hand, is particularly bad for the spread of conspiracy theories. This is important because governments everywhere need to address what we can do to stop this. It might be that tweaking the design matters a lot for how things spread. Another finding by the media group is that people typically learn about fake news from mainstream media. This can be because of fact-checking efforts, so it’s well-intentioned, but it shows that there are risks here.

Of particular interest, of course, are questions concerning how to remedy knowledge resistance. The psychologists have been carrying out studies showing that reducing information ambiguity helps. And the political science group has found that people in general are rather skilled at discriminating bad argumentation from good argumentation, even if this is somewhat diminished when it comes to arguments with a strong ideological tendency.

Another interesting thing has been the philosophy of what exactly knowledge resistance is. As we construe it, knowledge resistance involves a form of irrational resistance to available evidence. But people can fail to accept the evidence for other reasons, not because they are irrational but because they have weird background beliefs, perhaps as a result of disinformation. It actually gets very complicated disentangling the two, namely knowledge resistance from a rational rejection of the evidence. And also how to design experiments that keep these things distinct and avoid confounds.

KMH: This pre-empts my next question which is - what kind or kinds of irrationality do you think are in play when it comes to knowledge resistance?

ÅW: It’s epistemic irrationality, that is the irrationality of belief. Dan Kahan has done a lot of interesting experimental work on motivated reasoning, and in particular on identity protective reasoning which is this idea that we hold on to beliefs that have become marks of identity of the group that we care about. So, if these beliefs are threatened by evidence against them, we will find ways of protecting them and that of course is epistemically irrational, because you don't update beliefs in the light of evidence. But then he also suggests that this is sort of rational because the group is so important to you. Here it is essential to be clear on the distinction between epistemic rationality and practical rationality. Practically it can be rational to resist the evidence if it allows one to reach this goal of being a valued member of the group, but it doesn't mean that it's epistemically rational.

KMH: Do you think that not keeping epistemic and practical rationality distinct can cause problems?

ÅW: There are problems philosophically, but also sort of politically. If it is described as perfectly rational to not alter the beliefs of your group or your conspiracy beliefs because it serves you well, then that obscures what's going on here in a bad way, I think.

A related issue concerns where in the reasoning process the epistemic irrationality is to be located. Go back to the case where a subject fails to accept available evidence in a way that seems irrational but is actually rational given her prior beliefs. Then, usually, there's irrationality ‘upstream’. So, for instance, there might be irrationally placed trust which makes you read and believe bizarre conspiracy sources, and as a result you end up with beliefs that make it rational for you to reject evidence from climate scientists. However, if there isn't irrationality upstream either, then it's not knowledge resistance, even if the belief seems totally crazy.

An interesting question is under what conditions this could be the case. One can imagine fundamentalist conditions where the subject lives in an utterly closed, sect-like environment, with no information coming in from outside. Then you can end up with really bizarre beliefs and totally, but rationally, reject available knowledge. But I think it’s important to stress that fundamentalist conditions are extremely rare. Even in isolated areas in the US where everybody just listens to Fox News, they know that the New York Times exists and that there are other sources ‘out there’ that they do not have reasons to distrust. So, even if someone could in principle have a crazy set of beliefs in a fully rational way, I think that would be an outlier and very rare. But it is of course an empirical question.

KMH: Great. So, last question, what are your future plans or future directions for the program?

ÅW: We are halfway through now, so we have another three years. We’re continuing to develop the cross-disciplinary work because that’s the strength of the program. We have a new volume just out with Routledge called ‘Knowledge Resistance in High-Choice Information Environments’, which brings together people from all the disciplines involved so we're excited about that.

We also have a big mid-term conference in August. The conference has the same name as the volume, and it brings together researchers from the four disciplines involved, internal and external. (Info will appear on the Knowledge Resistance website soon.)

KMH: They sound great and I’ll look out for them! Thank you so much for talking with me.
