Tuesday, 29 September 2020

Cognitive Transformation, Dementia, and the Moral Weight of Advance Directives

Today's post is by Em Walsh (McGill University).



Em Walsh



The following is a real-life case study of a woman referred to as Mrs Black (Sokolowski 2018, 45-83). Mrs Black received a diagnosis of mid-stage dementia at the age of eighty-five. Mrs Black’s dementia impacted her ability to recall both the names and faces of her family members. Nevertheless, Mrs Black was noted by nurses who cared for her as always being an exceptionally happy woman, who took great pleasure in her daily activities in the residential care home in which she lived. Whilst in care, however, Mrs Black developed a serious bacterial infection, which posed a risk to her life if left untreated. Mrs Black’s primary caregivers wanted to treat the infection, but Mrs Black’s son noted that she had an advance directive stipulating that if she ever developed a condition which resulted in her inability to recognize her family members, she would not wish to receive any medical treatment to prolong her life. Her advance directive was implemented, and Mrs Black died shortly thereafter, leaving the medical team who cared for her devastated (Sokolowski 2018).

The dominant view in the philosophical literature suggests that advance directives, documents which allow individuals to set out directions for their future medical care in the eventuality that they lose decisional capacity (de Boer et al 2010, 202), ought to hold decisive moral weight. Thus, defenders of this view, such as Ronald Dworkin (1994), Jeff McMahan (2005), and Govind Persad (2019), would maintain that the decision made in the case of Mrs Black was the correct one. The reason for this is that, on such views, documents such as advance directives reflect an individual’s judgements about their own life and should therefore be given significant moral weight, even when the price of doing so is the life of the individual in question.

In my paper, "Cognitive Transformation, Dementia, and the Moral Weight of Advance Directives", I suggest that the dominant philosophical view does not align well with current clinical practice, in which clinicians show great reluctance to implement advance directives that undermine a dementia patient’s overall well-being. I put forward a philosophical defence of current clinical practice which gives moral weight to the preferences of dementia patients after the onset of their disease. In particular, I use L.A. Paul’s transformative experience framework (Paul 2016) to argue that having dementia is a cognitively transformative experience and that preference changes which arise from it are legitimate and ought to be given moral weight in medical decision-making.


This paper has been responded to by various bioethicists, clinicians, lawyers, and psychologists. These responses have also been published in the American Journal of Bioethics, and so too has my own response to these open peer commentaries. I invite those interested in the debate to email me if they have any comments or questions, as I would love to continue the dialogue on this issue further.

Tuesday, 22 September 2020

Intellectual Humility and Prejudice

Today's post is by Matteo Colombo, Kevin Strangmann, Lieke Houkes, Zhasmina Kostadinova and Mark J. Brandt.

Matteo

How does intellectual humility relate to prejudice? If I am more intellectually humble than you are, will I also be less prejudiced? Some would say yes. In much of the early monastic Christian tradition, for example, humility is understood as a virtuous form of abasement grounded in self-knowledge and self-appraisal. In his Demonstrations, Aphrahat the Persian Sage—a Syriac Christian author of the third-fourth century—writes that “humility is the dwelling place of righteousness. Instruction is found with the humble, and their lips pour forth knowledge. Humility brings forth wisdom and understanding.” Aphrahat’s suggestion that intellectual humility is the antidote to vanity, pride, and prejudice is representative of one traditional way of understanding this character trait.


Mark

Some would say no. The idea is that the self-abnegation and abasement constituting humility are not virtuous, since they can reinforce existing structures of oppression and self-denigration. In Section IX of An Enquiry Concerning the Principles of Morals, David Hume, for example, includes humility in his list of “monkish virtues,” which “stupefy the understanding and harden the heart, obscure the fancy and sour the temper.” According to Hume’s view, being intellectually humble is generally a vice, which need not weaken one’s pride and prejudices against others. Contributing to the recent boom of philosophical and psychological work on intellectual humility, our paper “Intellectually Humble, but Prejudiced People. A Paradox of Intellectual Virtue” presents four empirical studies that clarify the relationship between intellectual humility and prejudice. We find support for three conclusions.

First, people are prejudiced towards groups perceived as dissimilar. If I perceive that you belong to a group very different from the type of human I happen to be, then I will probably dislike you, even if I know nothing about you.


Lieke

Second, intellectual humility weakens the association between perceived dissimilarity and prejudice. This suggests that intellectual humility helps break the link (or at least weaken the link) between seeing a group as dissimilar and prejudice. Aphrahat could use such a finding as an empirical basis for his account of humility.

Third, and paradoxically, more intellectual humility is associated with more prejudice overall. Looking across groups, from those perceived as most similar to those perceived as most dissimilar, people with more intellectual humility express more prejudice towards these groups than people with less intellectual humility. Intellectual humility might make prejudice more severe! Such a finding might help Hume ground his view empirically.


Kevin

When juxtaposed, these three conclusions suggest that we should not think in terms of Aphrahat vs. Hume, but rather Aphrahat and Hume. Both identify some truth about intellectual humility. From our studies, we believe that whether and how intellectual humility emerges as a virtue or a vice likely depends on the situation or the target of judgment.

Evaluating when and to what extent intellectual humility promotes (or hinders) the attainment of certain epistemic goods requires that we clarify its mechanisms, its causal relationships with other beliefs, attitudes, and traits, and the functions it performs across different situations.


Zhasmina

Tuesday, 15 September 2020

Belief's Minimal Rationality

Today's post is by Marianna Bergamaschi Ganapini who talks about a recently published article in Philosophical Studies.




In this paper, I defend the claim that a mental attitude is a belief if it shows at least a minimal degree of doxastic rationality. More specifically, beliefs are minimally rational in the sense that they respond to perceived irrationality by re-establishing internal coherence (or at least by clearly attempting to do so).

A traditional view in philosophy is that it is a necessary condition for being a belief that an attitude behaves largely in a rational way (I call this view “Traditionalism”). That is, belief is typically (i) an attitude sensitive to the relevant evidence. It is also (ii) inferentially connected with other beliefs and other mental attitudes (e.g. emotions), and (iii) it typically causes actions when linked to a relevant desire.

Contrary to this view, there is now strong empirical evidence that some of the attitudes that we comfortably call ‘beliefs’ show signs of doxastic irrationality: they are at times behaviorally inert, immune to counter-evidence, or inferentially compartmentalized. The Traditionalist approach would definitely exclude these attitudes from the category ‘belief’.

This has led some philosophers to reject Traditionalism and propose a much more lenient approach (Bortolotti, 2010). On this “Revisionist” view, for attitude A to be a belief, it suffices that A is expressed through a sincere assertion. That means that for the Revisionist, it is not expected that beliefs will behave in a mostly rational way: some beliefs show the same immunity to evidence and behavioral inertness we see with other attitudes such as imagination and acceptance.

Unfortunately, using sincere assertion as the mark of belief is not enough to uniquely individuate beliefs: people sincerely assert that p even when they mistakenly take themselves to believe that p. The Revisionist proposal is thus at risk of inflating the category ‘belief’ to include attitudes that are not in fact beliefs.

 

The view I propose in the paper is a midway position between Traditionalism and Revisionism.

On this view, irrational, inert, and patchy beliefs are possible, but belief’s key marker is exemplified in its reaction to irrationality. This reaction shows that belief is at least minimally rational in the following ways. First: minimal rational constraints will aim at re-establishing internal coherence (e.g. by eliminating one of the conflicting attitudes). Second: these rational constraints will be applied when doxastic irrationality is detected.

In the paper, I refer to empirical evidence supporting my view while also showing that it is compatible with different kinds of mental architectures. This approach also offers a plausible picture of what makes belief unique: when belief’s fragmentation and irrationality are revealed, there will be a contrary push for coherence.

In contrast, attitudes such as imagination are governed by a decoupling mechanism that allows our imaginative episodes to be compartmentalized (Bayne 2010; Leslie 1987). Compartmentalization in the case of imagination (supposition, or acceptance) allows us to engage in complicated hypothetical reasoning by keeping track of multiple possible scenarios at the same time. In contrast, belief’s push for coherence applies across the board and tries to prevent the kind of fragmentation we see with imagination and the rest (Currie & Ravenscroft 2002).

Tuesday, 8 September 2020

Delusion Formation as an Inevitable Consequence of a Radical Alteration in Lived Experience

This post is by Rachel Gunn summarising an article co-authored with Michael Larkin and published in Psychosis. The article is based on Rachel's PhD work at Birmingham. The research findings and conclusions, framed in terms of the Enactive Approach, point towards the need to understand the phenomenology of a person’s experience in terms of sense-making within a person-environment system.


Rachel Gunn


A person ordinarily understands and negotiates the world based on familiar patterns derived from her cultural and historical experience. She is born into a family, the family consists of particular relationships and the family lives within a relatively circumscribed culture. Humans are flexible and adaptive. The difference between the lived experience of a hunter-gatherer in the Amazon Rainforest and an investment banker in the City of London highlights this flexibility. 

 There might be circumstances under which an alteration in a person’s lived experience would be so radical that rapid adjustment is not possible (Parnas & Handest, 2003) and when someone becomes mentally ill, there are many factors that might contribute to this (see Bentall, 2016 for an extensive list of candidates). People are dynamic and complex, and in most cases, so are the factors that causally contribute to their psychological distress.


Drawing on interviews conducted with people experiencing clinically significant delusions and analysing the data using Interpretative Phenomenological Analysis (IPA), we show how this alteration in lived experience manifests as emotional, affective and/or perceptual anomalies. 

Writing about IPA, Smith and colleagues use Heidegger’s notion of appearing and liken interpretation to a kind of detective work in which the researcher is mining the material for possible meanings, thus allowing the phenomenon of interest to shine forth (Smith et al., 2009, p. 35). The double hermeneutic means that the researcher is always trying to make sense of the participant, who is in turn trying to make sense of what is happening in the context of her lifeworld as an embodied, situated person.

For example, one research participant (whom I have called Andrew) was severely bullied at work and described it thus:

It’s that awful. You’ve seen the original ‘Planet of the Apes’… film, 1964 I think it is with Charlton Heston… and you know how he’s treated during it? Management treat you the… similar to that. That’s how it felt.

(In the film ‘Planet of the Apes’ human beings are treated like animals, used for slave labour, kept in cages and experiments are done on them.)

This constituted a radical alteration in his experience and his world-view. He had never experienced anything like this before and could not really understand what was happening or why it was happening to him. He went on to develop voice-hearing experiences which he described as God talking to him – these experiences assured him that justice would be done and perhaps prevented him from feeling utterly powerless in an impossibly difficult and distressing work situation.

The framework of the Enactive Approach posits that a person interacts with her environment in terms of sense-making and a vast array of factors (biological, psychological and environmental) are intermeshed to create a ‘person-environment system’. A person is not a discrete object; persons are comprised of bodies, stories, concepts, origins, commitments, connections, affordances - and so on - and are constantly reacting with their environment. 

Cognition emerges from the complex mereology of these many components. The mereology of the cognising (person-environment) system supposes that the parts (which include relational and environmental parts as well as bodily (person-level) parts) are arranged in a particular way, and that the relationship of the parts to each other is vital for the function of the whole (Varela et al., 1991).

From a clinical perspective, this demands an attempt to understand the phenomenology of the experience.

In this context the focus of treatment might then be directed towards affective, perceptual and emotional aspects of a person’s lived experience as well as environmental and relational factors. To think of delusions simply in terms of ‘false beliefs’ which are ‘firmly sustained despite… obvious proof or evidence to the contrary…’ (DSM-5, American Psychiatric Association, 2013, p. 819) is to limit them to cognitive anomalies, over-simplify the experience and deny the meaning an experience might hold for a given individual.

There is a wide literature on the nature of stigma in mental illness (see for example Canadian Health Services Research Foundation, 2013; Mehta & Farina, 1997) and framing delusion formation in this way helps us to reduce stigma: how can a person who is doing her best to make sense of her world be ‘at fault’ or ‘bad’? 

We suggest that if the alteration in lived experience is sufficiently radical, then delusion formation is inevitable. A person strives for sense-making in whatever environment she finds herself. We are all susceptible to this possibility. 


Tuesday, 1 September 2020

Delusions as Hetero-Dynamic Property Clusters

Today's post is by Shelby Clipp. If you want to know more, check her thesis, "Delusions as Hetero-Dynamic Property Clusters."



The standard position about the nature of delusions is doxasticism, according to which delusions are best characterized as a type of belief. However, the features of clinical delusions often differ from those typically associated with belief. For example, delusions tend to be highly resistant to counterevidence; and unlike typical beliefs they tend to exhibit limited or inconsistent behavioral guidance.

The discrepancy between the features of delusions and ‘normal’ beliefs has inspired an ongoing debate between doxasticists, who take delusions to be beliefs, and non-doxasticists, who take delusions to be instances of some other kind of state, such as imaginings or acceptances. In my thesis, “Delusions as Hetero-Dynamic Property Clusters,” I refer to this debate as the doxastic status debate. Despite efforts, the doxastic status debate remains unresolved.

I’ve argued that this is in part because, as the debate stands, it fits the bill for what Chalmers (2011) has described as a largely verbal dispute -- a dispute in which two parties agree on all of the relevant facts about a domain in question yet disagree on whether a certain term applies to a particular object of that domain. I attempt to advance the debate into more substantive territory by putting forward the hetero-dynamic property cluster (HDPC) model, a new descriptive model for characterizing delusions.

I develop the HDPC model against the background of Boyd’s (1991) homeostatic property cluster (HPC) approach to categories, according to which category membership is determined by a cluster of features which tend to mutually reinforce each other. On this view, states such as beliefs and imaginings are each HPCs. An example of such reinforcement is the way in which beliefs’ responsiveness to evidence tends to enable them to guide behavior productively.

However, on the HDPC model, delusions are best understood as mental states characterized by an odd and unstable cluster of features: Odd, insofar as unlike with attitudes such as beliefs and imaginings, the combinations of features characteristic of delusions tend to resist one another rather than reinforce. For instance, unlike with beliefs which tend to be revised in light of counterevidence, an individual with Capgras delusion might cling to the assertion that their spouse is an impostor despite ample counterevidence, and this resistance to counterevidence subsequently creates problems for letting that state guide behavior -- e.g., one won’t find their ‘actual’ wife by searching for her.

By not reinforcing one another, the features of delusions are unstable in that there is nothing to hold them together as a cluster, so to speak. This instability loosens restrictions on the mental state’s movement both within the property space of a particular attitude and from one property space to another, thereby allowing the features of an HDPC attitude to waver freely between distinct property clusters. To some extent, then, delusion’s odd and unstable nature has served as a catalyst for the doxastic status debate, in that it inexorably contributes to delusion’s failure to resolve neatly into any one kind of more familiar cognitive attitude.


Tuesday, 25 August 2020

Insights into the Workings of an Epistemic Frame Trap

Today’s blog post by Marion Nao adds a discourse analytic perspective to imperfect cognition via Goffman’s sociological theory of frame trap. It presents some key insights from a recent paper in Language and Communication, entitled:  'The lady doth protest too much, methinks': Truth negating implications and effects of an epistemic frame trap.

Marion Nao holds a PhD in Language and Communication Research from Cardiff University, UK, and currently teaches online for Universitat Oberta de Catalunya, Spain.



 

Many of us may be uncomfortably familiar with the concept and experience of a double-bind or Catch-22 situation, in which, crudely put, you’re damned if you do and damned if you don’t. Add to the complex a discursive mechanism by which the more you do, the more damned you are, in anticipation of which being damned if you don’t might seem like the lesser of the two evils, and you likely have the workings of a frame trap. In short, and metaphorically, with increased resistance, you tighten your own noose. So, how does frame trap work, and why might it be relevant to biased beliefs?

In a recent paper, I explored this concept, which can be attributed to Goffman (Frame Analysis, Harper and Row, 1974), in relation to one specific expression: “The lady/thou doth protest too much, methinks” (Nao, 2020). This I classed as an epistemic frame trap, on the basis that it invalidates the very truth of its recipient, who is guilty of untruth whether or not they protest their innocence in response—and all the more so, the more they do. Consequently, it can be said to operate by a mechanism of recursive truth negation.

 

As a formulaic expression used with a high level of fixity (and only minor variations in formulation), its meaning is figurative in representing more than the literal sum of its parts. As such, it conveys a good deal of pre-packaged meaning in a conventionally applied way, supporting habitually unexamined assumptions about the way the world works and what it is we are going about doing in it. A crucial component of such formulaicity is ‘protest too much’, which indicates an excess of protestation. Problematically, such excess is itself presumed to be evidence of untruth.

 

From online users’ definitions of what the expression means, we can see that excess remains ambiguous with regard to whether it refers to the amount of speech, its emotive force, or both. With its evidential basis thus undifferentiated, it is questionable whether excess is at all measurable, and if so, to what extent any criteria might be judiciously applied given the expression’s formulaicity of meaning and axiomaticity of use, not to mention the imperfect cognition of its user.

Core to the operation of the frame trap is the conversational expectation of denial in the face of an unjust accusation, without which the accusation is taken to be true. Yet the mechanism of recursive truth negation means that a heightened emotional response, or indeed any response at all, is likewise taken as evidence of guilt, and all the more so the stronger it is. As a tool, whether consciously used or not, the expression thus offers its user considerable scope to maintain their own set of beliefs about the world and the people in it to the disadvantage of another’s.

Tuesday, 18 August 2020

Delusion-Like Beliefs: Epistemic and Forensic Innocence?

Today's post is by Joe Pierre, Acting Chief at the Mental Health Community Care Systems, VA Greater Los Angeles Healthcare System, and Health Sciences Clinical Professor in the Department of Psychiatry & Biobehavioral Sciences at the David Geffen School of Medicine at UCLA.




The blurry line separating psychopathology and normality, in the real world and the DSM, has been a longtime interest. Twenty years ago, I attempted to disentangle religious and delusional beliefs using the “continuum” model of delusional thinking based on cognitive dimensions. More recently, I’ve tried to understand other “delusion-like beliefs” (DLBs) including conspiracy theories, a frequent topic of my blog, Psych Unseen. A forthcoming paper models belief in conspiracy theories as a “two component, socio-epistemic” process involving epistemic mistrust and biased misinformation processing.

Delusions and DLBs remain challenging to distinguish in clinical practice and in the internet era where fringe beliefs are often validated. Continuum models can be helpful, along with some categorical guidelines. Delusional beliefs are false; DLBs may not be. Delusions are usually idiosyncratic/unshared, based on subjective experience, and self-referential; DLBs usually aren’t. On the contrary, DLBs are typically based on learned misinformation if not deliberate disinformation.

In forensics, the distinction between delusions and DLBs can be crucial. Mass murderer Anders Breivik nearly eluded criminal conviction based on how Norwegian law treats psychosis as an exculpatory factor (see Dr. Bortolotti et al’s nuanced account). For prosecutors and expert witnesses supporting their cause, the potential exculpatory role of DLBs therefore presents a sizeable headache. Consequently, a group of forensic psychiatrists led by Dr. Tahir Rahman has proposed a new diagnostic category called “extreme overvalued beliefs” to describe DLBs that they claim are easily differentiated from delusions:

An extreme overvalued belief is one that is shared by others in a person's cultural, religious, or subcultural group. The belief is often relished, amplified, and defended by the possessor of the belief… The individual has an intense emotional commitment to the belief and may carry out violent behavior in its service.


Although I agree that DLBs deserve to be separated from delusions, and that DSM-5 doesn’t help much, I’m a critic of “extreme overvalued beliefs” as a solution for several reasons (see here and here for more details):

First, diagnosing “extreme overvalued beliefs” isn’t nearly as easy as is claimed.

Second, DLBs shouldn’t be “swept under the rug” of a new psychiatric umbrella term. A fuller understanding would benefit from integrating established concepts from psychology (e.g. conspiracy theories), sociology and political science (e.g. terrorist “extremism”), and information science (e.g. belief in misinformation).

Third, “extremism” in overvalued beliefs is defined by criminal behavior, not on dimensional features of the belief itself, leaving unresolved why some commit violent acts in the service of DLBs, but most don’t.

And finally, the conceptualization has a concerning prosecutorial bias, seemingly in the service of thwarting defense efforts to claim incapacity and argue for sentence mitigation due to DLBs. Since many DLBs are learned or indoctrinated, a more nuanced view might see them through a lens of not only epistemic innocence, but potential forensic innocence as well. In a world where misinformation and disinformation now run rampant, we should at least consider that the distributed responsibility for DLBs and their occasional forensic impact extends beyond the individual.


Tuesday, 11 August 2020

Ecumenical Naturalism

Today's post is by Robert N. McCauley, William Rand Kenan Jr. University Professor at the Center for Mind, Brain, and Culture at Emory University and George Graham, Professor of Philosophy at Georgia State University.


Robert N. McCauley


Our book, Hearing Voices and Other Matters of the Mind, promotes a naturalistic approach, which we call Ecumenical Naturalism, to accounting for the long recognized and striking cognitive continuities that underlie familiar features of religiosity, of mental disorders, and of everyday thinking and action.

The case for those continuities rests on two considerations. The first is empirical findings that mental phenomena (e.g., hearing voices) associated with mental disorders are more widespread than typically assumed. The second consideration concerns those continuities’ grounding in one sort of intuitive, unconscious, automatic, instantaneous (System 1) cognition, viz., maturationally natural cognition (MATNAT). MATNAT systems address a host of cognitive tasks that are critical to individuals’ survival and that have nothing to do either with one another or with religion -- handling such matters as hazard precautions, agency detection, language processing, and theory of mind (to name but a few). The associated knowledge and skills, which recur in human minds across cultures, are not taught and appear independent of general intelligence.




The by-product theory in the cognitive science of religions contends that much religious thought and behavior can be explained in terms of the cultural activation of MATNAT systems. Religions’ representations cue these systems’ operations and, in doing so, they sometimes elicit responses that mimic features of cognition and conduct associated with mental disorders. The book looks at three disorder-specific illustrations.

One occurs both in schizophrenia and in religions when people purport to hear voices of agents other than themselves, even though the experiences consist of their own inner speech and no other speaker is present. Appealing to a collection of MATNAT systems, including source monitoring, agency detection, linguistic processing, and theory of mind, the book provides an account of the perceived alien (non-self) source of a voice – distinguishing between what is experienced (one’s own silent speech) and how and when it is experienced as a voice of another agent (such as God).

A second disorder is a type of depression sometimes called a dark night of the soul, in which the inability of depressed participants to communicate with or sense their religions’ powerful, caring gods can exacerbate their depression. It is associated with prayers of petition that are perceived to be unanswered. Understanding the depression requires an exploration of cognitive systems (such as agency detection and theory of mind) at work in linguistic communication, but in which God is conceived as a displeased or indifferent listener.


George Graham

Third, by way of their rituals and pronouncements about moral thought-action fusion (TAF), i.e., the position that untoward thoughts are fully comparable morally to untoward actions, religions often can domesticate the concerns and compulsions of people with OCD. This peculiarly religio-moral exemplification of OCD is known as “scrupulosity.” Even more to the point, though, religious rituals and claims for moral TAF evoke, at least temporarily, similar obsessions and compulsions in the general population.

We contend that an exception (Autistic Spectrum Disorder (ASD)) helps prove the rule (the by-product theory). Exceptions only prove rules or support theoretical principles when those principles explain why the exception is exceptional. The earlier examples show how religions utilize cultural materials to elicit and frame experiences that excite the same cognitive apparatus that is implicated spontaneously in the corresponding mental disorders. By contrast, the cognitive impairments associated with ASD concerning theory of mind should, correspondingly, suggest constraints on religious understanding and inferential abilities among this population. We expect people with ASD as a population to prove exceptional on some fronts regarding some salient dimensions of religious cognition. Such negative findings here, then, do support the by-product theory.

Ecumenical Naturalism’s approach to mental abnormalities and religiosity promises both explanatory and therapeutic understanding. The book closes with a discussion of the positive therapeutic applicability of its theses about cognitive systems to disorders of religious significance.

Tuesday, 4 August 2020

Delusions and Theories of Belief

This post is by Michael Connors and Peter Halligan. Here they discuss their recent paper entitled 'Delusions and theories of belief' that was published in Consciousness and Cognition. Michael Connors is a research associate in the Centre for Healthy Brain Ageing at the University of New South Wales. Peter Halligan is an honorary professor in the School of Psychology at Cardiff University. 


Michael Connors

One approach to understanding cognitive processes is through the systematic study of their deficits. Known as cognitive neuropsychology, the study of selective deficits arising from brain damage has provided a productive way of identifying the cognitive processes underlying many well-circumscribed abilities, such as reading, perception, attention, and memory.


Peter Halligan


The application of these methods to higher-level processes has been more contentious. In an approach known as cognitive neuropsychiatry, researchers over the past 30 years have applied similar methods to studying delusions -- widely considered to be pathologies of belief. While providing some insights into the cognitive nature of delusions, the approach has still to address its reciprocal goal of informing accounts of normal belief.

This limitation is significant: as Marshall and Halligan noted in Method in Madness (1996), a unified theory of delusions is unlikely without an account of normal belief formation.

In a recent paper, we examine some of the reasons for this lack of progress and suggest a way forward for overcoming these challenges (Connors & Halligan, 2020).


Challenges


From the outset, there are important differences between the two domains of study. Delusions are defined against a background of social norms and values; encompass broad aspects of experience; involve excessive functioning; and are more likely to vary over time compared to the more value-free, encapsulated, and stable deficits studied in cognitive neuropsychology (David, 1993).

In addition, the assumptions of cognitive neuropsychology may not hold in this new domain. There are four such assumptions, and each can be problematic.

Central to the cognitive neuropsychology approach is the concept of modularity – the idea that cognitive processes can be decomposed into specific, relatively autonomous subcomponents. This may not apply to beliefs, which integrate the outputs of several distinct modular systems across different domains and so are not easily decomposed.

Damage to cognitive systems may not be transparent to researchers – patients may conceal beliefs for social reasons in a way that is not possible with lower-level cognitive processes.

Cognitive processes in belief formation are unlikely to be selectively impaired without impacting other processes. Many delusions occur without identifiable brain lesions and new beliefs are likely to bias lower-level cognitive processing, including perception and memory, so as to be consistent with the beliefs. New beliefs may similarly engender related supporting beliefs, producing more widespread changes in the cognitive system.

Finally, generalising between patients may be problematic if pre-existing individual differences, including premorbid beliefs, are not considered.


Current Theories of Delusions


These issues are important as a leading theory of delusions – the two factor account – is based in cognitive neuropsychology (Coltheart et al., 2011).

The theory is derived from cases of monothematic delusions, such as Capgras (the belief that a familiar person has been replaced by an imposter). Several patients with this delusion show impaired autonomic responses to familiar faces – a deficit that could account for the delusion’s content (Factor 1). There are, however, patients with this deficit but without the delusion, which gave rise to the need to posit a second factor – a deficit in belief evaluation.

This dissociation between symptoms does not provide definitive support for a second factor. There is no independent evidence of a second factor and other differences are possible between the two groups. More fundamentally, given uncertainty about underlying assumptions, it is not clear that the logic of dissociations can be applied.

Importantly, predictive coding accounts do not currently provide an alternative at a cognitive level. These accounts are aimed at a broader level of explanation and attempt to relate more general patterns in cognition to neurophysiology, rather than offering a specifically cognitive account (Corlett et al., 2016).


A Possible Way Forward


Connors and Halligan (2015) argued that it is possible to outline five broad stages of belief formation at a cognitive level independent of modularity and other assumptions of cognitive neuropsychology.

Beliefs are likely to arise in response to a precursor, a distal trigger of the belief’s content. This may involve, for example, unexpected sensory input or communication from others.

Between the precursor and the belief, at least two intermediate stages need to be accounted for: firstly, how meaning is ascribed to the precursor and, secondly, how such meaning is evaluated and screened.

Once a belief is formed – itself the fourth stage – a fifth stage is the effect the belief has on experience and other cognitive processes. This also includes effects on earlier stages of belief formation by shaping what precursors are attended to, how they are interpreted, and how competing hypotheses are evaluated.




While admittedly still underspecified, the account has the benefit of being parsimonious, yet flexible enough to begin to account for the heterogeneity of beliefs in both the general population and people with delusions.

We believe that this account has sufficient detail to guide future research and address limitations in existing cognitive theories of delusions. Given the unique properties of belief, we also suggest that there is a need to widen and adapt research methods and offer specific proposals in our paper.

We consider that such an approach, whilst attempting to relate pathology to a model of normal function, may help cognitive neuropsychiatry reach its original goals and offer insight into both delusional and nonpathological belief.

Tuesday, 28 July 2020

“If this account is true it is most enormously wonderful”

Today's post is by Sacha Altay, Emma de Araujo and Hugo Mercier.

Sacha Altay is doing his PhD thesis at the Jean Nicod Institute on misinformation from a cognitive and evolutionary perspective. Emma de Araujo is a master’s student in Sociology doing an internship at the Jean Nicod Institute in the Evolution and Social Cognition Team. Hugo Mercier is a research scientist at the CNRS (Jean Nicod Institute) working on argumentation and how we evaluate communicated information.


Why do people share fake news? We believe others to be more easily swayed by fake news than we are (Corbu et al., 2020; Jang & Kim, 2018), so an intuitive explanation is that people share fake news because they are gullible: if people can’t tell truths from falsehoods, they will inadvertently share falsehoods often enough. In fact, laypeople are quite good at detecting fake news (Pennycook et al., 2019, 2020; Pennycook & Rand, 2019) and, more generally, they don’t get fooled easily (Mercier, 2020). However, despite this ability to spot fake news, people do share some news they suspect to be inaccurate (Pennycook et al., 2019, 2020). Why would they do that?


 

One explanation is that people share inaccurate information by mistake, because they are lazy or distracted (Pennycook et al., 2019; Pennycook & Rand, 2019). Indeed, a rational mind should only share accurate information, right? Not so fast. First, laypeople are not professional journalists; they share news for a variety of reasons, such as bonding with peers or having a laugh. Second, even when one’s goal is to inform others, accuracy alone is not enough.

How informative do you find the following (true) news item: “This morning a pigeon attacked my plants”? Now consider these fake news stories: “COVID-19 is a bioweapon released by China” and “Drinking alcohol protects from COVID-19.” If true, the first one might start another world war, and the second one would end the pandemic in a massive and unprecedented international booze-up. Despite being implausible, as long as one is not entirely sure that such fake news is inaccurate, it has some relevance and sharing value.



In a recent article, we tested whether, as suggested above, the “interestingness-if-true” of a piece of information can in part make up for its questionable accuracy. In two pre-registered experiments, we had 600 online participants rate how willing they would be to share a series of true and fake news stories, and rate how accurate and interesting-if-true each piece of news was. Participants were more willing to share news they found interesting-if-true, or accurate.


In addition, fake news was deemed much less accurate than true news but also more interesting-if-true.

 

 


Our results suggest that people may not share fake news because they are gullible, distracted or lazy, but instead because fake news has qualities that make up for its relative inaccuracy, such as being more interesting-if-true. Yet people likely share news they think might not be accurate for a nexus of reasons besides its interestingness-if-true (see, e.g., Kümpel et al., 2015; Petersen et al., 2018; Shin & Thorson, 2017). For instance, older adults share more fake news than younger adults despite being better than them at detecting fake news, probably because “older adults often prioritize interpersonal goals over accuracy” (Brashier & Schacter, 2020, p. 4).

In the end, we should keep in mind that (i) being accurate is not the same thing as being interesting: accuracy is only one part of relevance; (ii) sharing is not the same thing as believing: people share things they don’t necessarily hold to be true; and (iii) sharing information is a social behavior motivated by a myriad of factors, and informing others is only one of them.

Tuesday, 21 July 2020

How to tackle discomfort when confronting implicit biases

Ditte Marie Munch-Jurisic (Photo ©Dorte_Jelstrup)

Today's post is by Ditte Marie Munch-Jurisic, who is a postdoc at the Section for Philosophy and Science Studies at Roskilde University, Denmark.


It has become quite trendy to argue that it is okay (or maybe even required) to make people feel uncomfortable because of their biases or prejudices. In my new paper in Ethical Theory and Moral Practice, The Right to Feel Comfortable: Implicit Bias and the Moral Potential of Discomfort, I discuss this trend by arguing that there are good reasons (from affective neuroscience) why we should curb our enthusiasm when it comes to the moral potential of discomfort. It certainly can be justified to call people out for their biased behavior, and we need not comfort every display of what I call “awareness discomfort”. But in such situations we shouldn’t expect to be changing the receiver’s moral mindset.

This is the first out of two papers on the feelings of discomfort generated by implicit biases and other subtle forms of discrimination (coming out of my research project funded by the Carlsberg Foundation). In another upcoming paper (Against Comfort: Political Implications of Evading Discomfort) I am more in favor of discomfort and argue that we should be ready to accept more of this kind of bias discomfort if we want to advance social mobility.

From psychologists advising on so-called implicit bias trainings to comedian Hannah Gadsby’s special, Nanette, which challenged conventions of stand-up comedy, speakers are increasingly confronting audiences with their complicity in structural forms of discrimination. Despite widespread scholarly interest in the moral potential of discomfort, there has been surprisingly little discussion of its potential pitfalls. Although discomfort advocates range from killjoys who endorse intentional and direct confrontation to more moderate voices who carve out careful distinctions between productive and unproductive forms of discomfort, only a few voices have directly called attention to the aversive effects of discomfort.

Such discomfort skeptics warn that, because people often react negatively to feeling blamed or called-out, the result of confrontational approaches is often counterproductive. To deepen this critique, I draw in the paper on the recent upsurge of research on negative affect and emotions in the affective sciences and philosophy of emotion to argue for a contextual understanding of discomfort that accounts for the complex phenomenology of aversive affect.

My primary aim is to caution against the current wave of discomfort advocacy. Advocates risk overrating the moral potential of discomfort if they underestimate the extent to which context shapes the interpretation of affect and simple, raw feelings. Context in this sense entails two dimensions: (i) the concrete situation of individual agents and (ii) the internal tools and concepts they use to interpret their discomfort. Rudimentary affect like discomfort does not necessarily have a transparent, straightforward intentionality.

Put simply, agents may not know precisely why they feel uncomfortable. Their specific situations and the interpretative tools they use to discern their discomfort are central to how they will understand their discomfort and the motivations they will draw from the experience. Affect—and especially negative affect like discomfort—has a paramount and often unpredictable influence on our judgments, behavior and understanding of the world. From the perspective of the contextual approach, a critical problem for discomfort advocates is that they risk ignoring the multiple kinds of discomfort that may arise in discussions of implicit bias.

Tuesday, 14 July 2020

Implicit Bias: Knowledge, Justice, and the Social Mind

Today's post is by Alex Madva (California Center for Ethics & Policy, California State Polytechnic University) who is introducing a new book co-edited with Erin Beeghly (University of Utah), entitled An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge 2020).


Alex Madva

In the wake of what might be the largest protests in American history responding to police and vigilante brutality against the black community, the point – or pointlessness – of “Implicit Bias Training” has taken on renewed urgency. Although I do implicit bias training myself, my co-editor Erin Beeghly and I share critics’ concerns: the trainings are “too short, too simplistic,” and too often function just to let organizations “check a box” to protect against litigation, rather than spark real change. 


Erin Beeghly


But “training” is just another word for “education,” and all kinds of education can be done well or poorly. If implicit bias is one important piece of a large and complex puzzle, then education about it—when done right—has a meaningful role to play in helping us understand problems and point toward solutions.

So what is implicit bias? How does it compromise our knowledge of others? How does it affect us, not just as individuals but as participants in larger social institutions? And what can we do to combat it? An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind engages these questions in non-technical terms. Each chapter includes discussion questions and annotated recommendations for further reading, and a companion webpage links to further teaching resources (podcasts, films, online activities…)





We are especially proud of the range of social and philosophical perspectives represented by our authors (chapter summaries here), many of whom are coming soon to a blogosphere near you…

Next week, several of our authors will be posting over at PEA Soup; then head over the following week (July 27th-30th) to the Brains Blog.

Also note that many chapters address policing and criminal justice, from summarizing findings that black men are perceived to be more threatening than (similarly sized) white men, to asking when it’s “reasonable” for officers to use lethal force, to drawing lessons for individual and structural interventions from the M4BL platform, to teaching in prisons.

But I’ll close with three specific examples from authors who won’t be blogging this go-round…

In “Skepticism about Bias,” Michael Brownstein considers arguments that apparent inequalities in criminal justice don’t really exist; or that they exist but aren’t unjust; or that they’re unjust but aren’t explained by implicit bias.

In “Stereotype Threat, Identity, and the Disruption of Habit,” Nathifa Greene calls for overhauling our understanding of stereotype threat, beyond studies on student test-taking, which replicate inconsistently at best. Instead, on Greene’s alternative view, “The increased self-consciousness of stereotype threat” is operative “when police, security guards, and vigilante citizens surveil and monitor activities like shopping… with the deadly risk of encounters with police as passersby stereotyped someone who was sleeping, socializing, or playing while black.”

Finally, in “Explaining Injustice: Structural Analysis, Bias, and Individuals,” Saray Ayala-López and Erin Beeghly explore the complex intertwining of structural injustice and individual bias—including how college campuses with prominent Confederate monuments have higher levels of implicit bias. So, yeah, in case you needed it, there’s another reason to tear them all down.

Tuesday, 7 July 2020

Unimpaired Abduction to Alien Abduction

Today’s post is by Ema Sullivan-Bissett, who is a Senior Lecturer in Philosophy at the University of Birmingham. Here she overviews her paper ‘Unimpaired Abduction to Alien Abduction: Lessons on Delusion Formation’, recently published in Philosophical Psychology. Last year, when millions of people had marked themselves as attending a storming of Area 51, Ema also wrote about her research for the Birmingham Perspective.


In the academic year 2013–14, I was a Postdoctoral Research Fellow on Lisa Bortolotti’s AHRC project on the Epistemic Innocence of Imperfect Cognitions. Towards the end of the Project, I was extremely fortunate to have the opportunity to be a Visiting Researcher at Macquarie University’s ARC Centre of Excellence in Cognition and Its Disorders.

Professor John Sutton hosted me for that month, but I was also lucky to spend some time with Professor Max Coltheart, and interviewed him for this blog. In the first part of the interview we talked about delusion formation, and in the second, we talked about alien abduction belief. I had thought a bit about this phenomenon prompted by some comments in Max’s paper ‘Delusional Belief’ (2011), and talking to Max about this really helped me get clear on what I wanted to say about it.

What resulted was my paper ‘Unimpaired Abduction to Alien Abduction: Lessons on Delusion Formation’, in which I argue that there is much to learn about this particular case of bizarre belief for what we should say about how monothematic delusions are formed, specifically, that the one-factor account ought to be the way we approach explaining monothematic delusion formation.


Ema Sullivan-Bissett


So what is alien abduction belief? Well, many people believe that they have been abducted by aliens. Often, they believe that particular events occurred as a part of this overall experience. For example, some abductees believe that they were taken aboard spaceships, subject to medical experimentation, formed sexual relationships and produced hybrid offspring with aliens, or received important information about the fate of the Earth. Clearly, this is a bizarre belief, but what does it have to do with monothematic delusions more generally?

To answer that question we need to summarise the state of play regarding the debate between one- and two-factor theorists of delusion formation. Psychologists and cognitive neuroscientists have argued that subjects with monothematic delusions have anomalous experiences in which these beliefs are rooted, but few (with the exception of Brendan Maher) take these strange experiences to be the only clinical factor. This is the one-factor approach.

The more popular view has it that there is a second clinical factor such as a cognitive deficit, bias, or performance error. This view is motivated—at least in part—by the fact that there are some subjects who have the anomalous experience associated with a delusion of a particular kind (e.g. the lack of affective response to faces typical of Capgras delusion), but do not themselves have the delusional belief (i.e. that their loved one has been replaced by an imposter). Two-factor theorists claim that a second factor is needed to explain why the anomalous experience prompts a delusional explanation in only some cases.

Alien abduction belief is interesting because theorists seeking to explain why people believe that they have been abducted by aliens have not sought to identify a cognitive abnormality, even whilst recognising that there are subjects who have experiences associated with alien abduction beliefs and who nevertheless do not believe that they were abducted by aliens. Rather, these researchers seek to identify a variety of normal-range cognitive processes that may contribute to explaining the generation of abduction beliefs. I argue that we have no reason to doubt the explanatory value of this research methodology were we to equip ourselves with it and turn to monothematic delusions more generally.

I conclude that the one-factor position ought to be the default approach for understanding delusion formation, and that those theorists interested in understanding the formation of delusional beliefs have much to learn from the case of alien abduction belief. To put it in the crudest terms, in the presence of anomalous experiences, it is normal for humans to have bizarre beliefs.

Tuesday, 30 June 2020

The Epistemic Innocence of Irrational Beliefs

Here I am briefly presenting my new book, The Epistemic Innocence of Irrational Beliefs, out today in the UK with Oxford University Press. Research culminating in this book was conducted for several projects that contributed to this blog, including project PERFECT, the Costs and Benefits of Optimism project, and the Epistemic Innocence of Imperfect Cognitions project.


Book cover

In an ideal world, our beliefs would satisfy norms of truth and rationality, as well as foster the acquisition, retention, and use of other relevant information. In reality, we have limited cognitive capacities and are subject to motivational biases on an everyday basis.

We may also experience impairments in perception, memory, learning, and reasoning in the course of our lives. Such limitations and impairments give rise to distorted memory beliefs, confabulated explanations, and beliefs that are delusional and optimistically biased.

In this book, I argue that some irrational beliefs qualify as epistemically innocent, where, in some contexts, the adoption, maintenance, or reporting of the beliefs delivers significant epistemic benefits that could not be easily attained otherwise. Epistemic innocence does not imply that the epistemic benefits of the irrational belief outweigh its epistemic costs, yet it clarifies the relationship between the epistemic and psychological effects of irrational beliefs on agency. 

It is misleading to assume that epistemic rationality and psychological adaptiveness always go hand in hand, but it is also misleading to assume a straightforward trade-off between them. Rather, epistemic irrationality can lead to psychological adaptiveness, which in turn can support the attainment of epistemic goals. Recognising the circumstances in which irrational beliefs enhance or restore epistemic performance informs our mutual interactions and enables us to take measures to reduce their irrationality without undermining the conditions for epistemic success.

Here is a brief explanation of epistemic innocence:



Six philosophers in the Imperfect Cognitions Research Network, all researching aspects of belief and rationality, have agreed to participate in a virtual book launch for this monograph with the following video presentations:


You are warmly encouraged to watch the videos, and then leave comments and ask questions about the book to them or to me here or on Twitter using the hashtag #EpistInnocence2020.


Tuesday, 23 June 2020

The Insanity Defence without Mental Illness

Today's post is by Marko Jurjako, Assistant Professor of Philosophy at the University of Rijeka, regarding the recent paper ‘The insanity defence without mental illness? Some considerations’, which he co-authored with Gerben Meynen, Professor of Forensic Psychiatry (Utrecht University) and endowed Professor of Ethics and Psychiatry (VU University Amsterdam), and Luca Malatesti, Associate Professor of Philosophy at the University of Rijeka. Marko and Luca’s work on this paper is an outcome of the project Responding to Antisocial Personalities in a Democratic Society (RAD), financed by the Croatian Science Foundation.


Luca Malatesti

In the last decade there has been a resurgence of interest in the insanity defence. One of the apparent moral truisms is that a person should not be blamed for actions they are not responsible for. As an instantiation of this principle, the moral rationale for the insanity defence is to prevent unjustly punishing offenders who are not responsible due to a mental illness.

Across the Western hemisphere, formulations of the insanity defence usually involve two components. One component, which we call the incapacity clause, states that a person is not accountable if, when committing the crime, she lacked some relevant psychological capacities, such as the cognitive ability to understand the nature of her action and the ability to control her behaviour in the light of that knowledge. For instance, if, due to a delusion, someone kills a person thinking that he is helping her, he is unaccountable because he did not know the nature of his action. The other component of the insanity defence, which we call the mental illness clause, requires that these incapacities are caused by a mental illness.

Gerben Meynen

Despite the common-sense view that the insanity defence presupposes the mental illness clause, legal scholars and philosophers debate whether this is the case. Some argue that the mental illness clause is not important for determining criminal responsibility because mental illness is neither sufficient nor necessary for determining whether someone should be excused for a crime. A judgment on her mental incapacity should be enough. Moreover, in recent years the Convention on the Rights of Persons with Disability (CRPD) has sparked additional discussion. According to some interpretations of the convention, not only the mental illness clause, but the insanity defence as such should be abolished because it discriminates against disabled individuals.


Marko Jurjako

In the paper, we focus our discussion on the role that the mental illness clause should play within legislations that adopt some form of the insanity defence. Thus, we do not directly discuss issues raised by the adoption of the CRPD. After providing a preliminary discussion of the rationale for having the insanity defence, we focus on the proper role of the mental illness clause in it.

We aim to offer a nuanced discussion of whether the mental illness clause should be retained as a component of the insanity defence. In this regard, we discuss three principal reasons why the clause is important for adjudicating cases of criminal non-responsibility.

The first reason relates to our exculpatory practices. In some cases, the presence of a mental illness indicates an internal impairment in decision-making capacities that undermines legal culpability in a way that cannot be attributed to any other cause outside the agent. In this sense, a mental illness can provide a particular cause that explains why the agent is not responsible for her crime.

The second reason pertains to our epistemological practices and practical limitations when trying to determine the accountability of a defendant. We argue that knowing whether a defendant is suffering from a specific mental illness can be especially helpful for establishing whether the agent at the time of the act had relevant incapacities. For instance, if the defendant suffers from schizophrenia, that gives us reason to examine whether she could have committed the crime while suffering from a paranoid delusion.

The third reason pertains to the general relation between legal practice and medical, psychological, and scientific advancements in the study of human behaviour. We maintain that the mental illness clause keeps a close tie between the relevant sciences of the mind and the law. It thus enables an interactive relationship that secures conceptual and evidentiary links between clinical and scientific advancements and ethically justifiable legal practices.

For instance, future studies might confirm that a certain subgroup of individuals with antisocial personality disorder suffer from such mental or brain incapacities that their criminal actions may result from dysfunctions in their neurophysiology. This scientific evidence would give us reason to conclude that, despite the appearance of ill will, a subgroup of defendants with severe forms of antisocial personality might not be accountable for some of their crimes.

The main outcome of our discussion is that an ethically justified formulation of the insanity defence need not necessarily include an explicitly stated mental illness clause. Nonetheless, we argue that the ethically justified formulations of the insanity defence should be able to accommodate the reasons underlying the adoption of this clause. 

Thus, our main conclusion is that different legislations might serve criminal justice solely based on the incapacity defence without a formal adoption of the mental illness clause. Depending on their other safeguards, these legislations should allow, however, that mental illness plays at least an evidentiary role in the incapacity defence.