Tuesday 28 July 2020

“If this account is true it is most enormously wonderful”

Today's post is by Sacha Altay, Emma de Araujo and Hugo Mercier.

Sacha Altay is doing his PhD thesis at the Jean Nicod Institute on misinformation from a cognitive and evolutionary perspective. Emma de Araujo is a master's student in Sociology doing an internship at the Jean Nicod Institute in the Evolution and Social Cognition Team. Hugo Mercier is a research scientist at the CNRS (Jean Nicod Institute) working on argumentation and how we evaluate communicated information.


Why do people share fake news? We believe others to be more easily swayed by fake news than we are (Corbu et al., 2020; Jang & Kim, 2018), so an intuitive explanation is that people share fake news because they are gullible. It makes sense that if people can’t tell truths from falsehoods, they will inadvertently share falsehoods often enough. In fact, laypeople are quite good at detecting fake news (Pennycook et al., 2019, 2020; Pennycook & Rand, 2019) and, more generally, they don’t get fooled easily (Mercier, 2020). However, despite this ability to spot fake news, people do share some news they suspect to be inaccurate (Pennycook et al., 2019, 2020). Why would they do that?


 

One explanation is that people share inaccurate information by mistake, because they are lazy or distracted (Pennycook et al., 2019; Pennycook & Rand, 2019). Indeed, a rational mind should only share accurate information, right? Not so fast. First, laypeople are not professional journalists; they share news for a variety of reasons, such as bonding with peers or having a laugh. Second, even when one’s goal is to inform others, accuracy alone is not enough.

How informative do you find the following (true) news item: “This morning a pigeon attacked my plants”? Now consider these fake news stories: “COVID-19 is a bioweapon released by China” and “Drinking alcohol protects from COVID-19.” If true, the first one might start another world war, and the second one would end the pandemic in a massive and unprecedented international booze-up. Despite being implausible, as long as one is not entirely sure that such fake news is inaccurate, it has some relevance and sharing value.



In a recent article, we tested whether, as suggested above, the “interestingness-if-true” of a piece of information can in part make up for its questionable accuracy. In two pre-registered experiments, we had 600 online participants rate how willing they would be to share a series of true and fake news stories, and how accurate and interesting-if-true each story was. Participants were more willing to share news they found interesting-if-true, or accurate.


In addition, fake news was deemed much less accurate than true news but also more interesting-if-true.

Our results suggest that people may not share fake news because they are gullible, distracted or lazy, but instead because fake news has qualities that make up for its relative inaccuracy, such as being more interesting-if-true. Yet people likely share news they think might not be accurate for a nexus of reasons besides its interestingness-if-true (see, e.g., Kümpel et al., 2015; Petersen et al., 2018; Shin & Thorson, 2017). For instance, older adults share more fake news than younger adults despite being better at detecting it, probably because “older adults often prioritize interpersonal goals over accuracy” (Brashier & Schacter, 2020, p. 4).

In the end, we should keep in mind that (i) being accurate is not the same thing as being interesting: accuracy is only one part of relevance; (ii) sharing is not the same thing as believing: people share things they don’t necessarily hold to be true; and (iii) sharing information is a social behavior motivated by a myriad of factors, and informing others is only one of them.

Tuesday 21 July 2020

How to tackle discomfort when confronting implicit biases

Ditte Marie Munch-Jurisic (Photo ©Dorte_Jelstrup)

Today's post is by Ditte Marie Munch-Jurisic, who is a postdoc at the Section for Philosophy and Science Studies at Roskilde University, Denmark.


It has become quite trendy to argue that it is okay (or maybe even required) to make people feel uncomfortable because of their biases or prejudices. In my new paper in Ethical Theory and Moral Practice, The Right to Feel Comfortable: Implicit Bias and the Moral Potential of Discomfort, I discuss this new trend by arguing that there are good reasons (from affective neuroscience) why we should curb our enthusiasm when it comes to the moral potential of discomfort. It certainly can be justified to call people out for their biased behavior, and we need not comfort every display of what I call “awareness discomfort”. But in such situations we shouldn’t expect to be changing the receiver’s moral mindset.

This is the first of two papers on the feelings of discomfort generated by implicit biases and other subtle forms of discrimination (coming out of my research project funded by the Carlsberg Foundation). In another upcoming paper (Against Comfort: Political Implications of Evading Discomfort) I am more in favor of discomfort and argue that we should be ready to accept more of this kind of bias discomfort if we want to advance social mobility.

From psychologists advising on so-called implicit bias trainings to comedian Hannah Gadsby’s special, Nanette, which challenged conventions of stand-up comedy, speakers are increasingly confronting audiences with their complicity in structural forms of discrimination. Despite widespread scholarly interest in the moral potential of discomfort, there has been surprisingly little discussion of its potential pitfalls. Although discomfort advocates range from killjoys who endorse intentional and direct confrontation to more moderate voices who carve out careful distinctions between productive and unproductive forms of discomfort, only a few voices have directly called attention to the aversive effects of discomfort.

Such discomfort skeptics warn that, because people often react negatively to feeling blamed or called out, confrontational approaches are often counterproductive. To deepen this critique, I draw in the paper on the recent upsurge of research on negative affect and emotions in the affective sciences and philosophy of emotion to argue for a contextual understanding of discomfort that accounts for the complex phenomenology of aversive affect.

My primary aim is to caution against the current wave of discomfort advocacy. Advocates risk overrating the moral potential of discomfort if they underestimate the extent to which context shapes the interpretation of affect and simple, raw feelings. Context in this sense entails two dimensions: (i) the concrete situation of individual agents and (ii) the internal tools and concepts they use to interpret their discomfort. Rudimentary affect like discomfort does not necessarily have a transparent, straightforward intentionality.

Put simply, agents may not know precisely why they feel uncomfortable. Their specific situations and the interpretative tools they use to discern their discomfort are central to how they will understand their discomfort and the motivations they will draw from the experience. Affect—and especially negative affect like discomfort—has a paramount and often unpredictable influence on our judgments, behavior and understanding of the world. From the perspective of the contextual approach, a critical problem for discomfort advocates is that they risk ignoring the multiple kinds of discomfort that may arise in discussions of implicit bias.

Tuesday 14 July 2020

Implicit Bias: Knowledge, Justice, and the Social Mind

Today's post is by Alex Madva (California Center for Ethics & Policy, California State Polytechnic University) who is introducing a new book co-edited with Erin Beeghly (University of Utah), entitled An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge 2020).


Alex Madva

In the wake of what might be the largest protests in American history responding to police and vigilante brutality against the black community, the point – or pointlessness – of “Implicit Bias Training” has taken on renewed urgency. Although I do implicit bias training myself, my co-editor Erin Beeghly and I share critics’ concerns: the trainings are “too short, too simplistic,” and too often function just to let organizations “check a box” to protect against litigation, rather than spark real change. 


Erin Beeghly


But “training” is just another word for “education,” and all kinds of education can be done well or poorly. If implicit bias is one important piece of a large and complex puzzle, then education about it—when done right—has a meaningful role to play in helping us understand problems and point toward solutions.

So what is implicit bias? How does it compromise our knowledge of others? How does it affect us, not just as individuals but as participants in larger social institutions? And what can we do to combat it? An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind engages these questions in non-technical terms. Each chapter includes discussion questions and annotated recommendations for further reading, and a companion webpage links to further teaching resources (podcasts, films, online activities…).

We are especially proud of the range of social and philosophical perspectives represented by our authors (chapter summaries here), many of whom are coming soon to a blogosphere near you…

Next week over at PEA Soup:

Then head over on the following week (July 27th-30th) to the Brains Blog:

Also note that many chapters address policing and criminal justice, from summarizing findings that black men are perceived to be more threatening than (similarly sized) white men, to asking when it’s “reasonable” for officers to use lethal force, to drawing lessons for individual and structural interventions from the M4BL platform, to teaching in prisons.

But I’ll close with three specific examples from authors who won’t be blogging this go-round…

In “Skepticism about Bias,” Michael Brownstein considers arguments that apparent inequalities in criminal justice don’t really exist; or that they exist but aren’t unjust; or that they’re unjust but aren’t explained by implicit bias.

In “Stereotype Threat, Identity, and the Disruption of Habit,” Nathifa Greene calls for overhauling our understanding of stereotype threat, beyond studies on student test-taking, which replicate inconsistently at best. Instead, on Greene’s alternative view, “The increased self-consciousness of stereotype threat” is operative “when police, security guards, and vigilante citizens surveil and monitor activities like shopping… with the deadly risk of encounters with police as passersby stereotyped someone who was sleeping, socializing, or playing while black.”

Finally, in “Explaining Injustice: Structural Analysis, Bias, and Individuals,” Saray Ayala-López and Erin Beeghly explore the complex intertwining of structural injustice and individual bias—including how college campuses with prominent Confederate monuments have higher levels of implicit bias. So, yeah, in case you needed it, there’s another reason to tear them all down.

Tuesday 7 July 2020

Unimpaired Abduction to Alien Abduction

Today’s post is by Ema Sullivan-Bissett, who is a Senior Lecturer in Philosophy at the University of Birmingham. Here she overviews her paper ‘Unimpaired Abduction to Alien Abduction: Lessons on Delusion Formation’, recently published in Philosophical Psychology. Last year, when millions of people had marked themselves as attending a storming of Area 51, Ema also wrote about her research for the Birmingham Perspective.


In the academic year 2013–14, I was a Postdoctoral Research Fellow on Lisa Bortolotti’s AHRC project on the Epistemic Innocence of Imperfect Cognitions. Towards the end of the Project, I was extremely fortunate to have the opportunity to be a Visiting Researcher at Macquarie University’s ARC Centre of Excellence in Cognition and Its Disorders.

Professor John Sutton hosted me for that month, but I was also lucky to spend some time with Professor Max Coltheart, and interviewed him for this blog. In the first part of the interview we talked about delusion formation, and in the second, we talked about alien abduction belief. I had thought a bit about this phenomenon prompted by some comments in Max’s paper ‘Delusional Belief’ (2011), and talking to Max about this really helped me get clear on what I wanted to say about it.

What resulted was my paper ‘Unimpaired Abduction to Alien Abduction: Lessons on Delusion Formation’, in which I argue that there is much to learn about this particular case of bizarre belief for what we should say about how monothematic delusions are formed, specifically, that the one-factor account ought to be the way we approach explaining monothematic delusion formation.


Ema Sullivan-Bissett


So what is alien abduction belief? Well, many people believe that they have been abducted by aliens. Often, they believe that particular events occurred as a part of this overall experience. For example, some abductees believe that they were taken aboard spaceships, subject to medical experimentation, formed sexual relationships and produced hybrid offspring with aliens, or received important information about the fate of the Earth. Clearly, this is a bizarre belief, but what does it have to do with monothematic delusions more generally?

To answer that question we need to summarise the state of play regarding the debate between one- and two-factor theorists of delusion formation. Psychologists and cognitive neuroscientists have argued that subjects with monothematic delusions have anomalous experiences in which these beliefs are rooted, but few (with the exception of Brendan Maher) take these strange experiences to be the only clinical factor. This is the one-factor approach.

The more popular view has it that there is a second clinical factor such as a cognitive deficit, bias, or performance error. This view is motivated—at least in part—by the fact that there are some subjects who have the anomalous experience associated with a delusion of a particular kind (e.g. the lack of affective response to faces typical of Capgras delusion), but do not themselves have the delusional belief (i.e. that their loved one has been replaced by an imposter). Two-factor theorists claim that a second factor is needed to explain why the anomalous experience prompts a delusional explanation in only some cases.

Alien abduction belief is interesting because theorists seeking to explain why people believe that they have been abducted by aliens have not sought to identify a cognitive abnormality, even whilst recognising that there are subjects who have experiences associated with alien abduction beliefs and who nevertheless do not believe that they were abducted by aliens. Rather, these researchers seek to identify a variety of normal-range cognitive processes that may contribute to the explanation of the generation of abduction beliefs. I argue that we have no reason to doubt the explanatory value of this research methodology were we to adopt it and turn to monothematic delusions more generally.

I conclude that the one-factor position ought to be the default approach for understanding delusion formation, and that those theorists interested in understanding the formation of delusional beliefs have much to learn from the case of alien abduction belief. To put it in the crudest terms, in the presence of anomalous experiences, it is normal for humans to have bizarre beliefs.