Tuesday 25 August 2020

Insights into the Workings of an Epistemic Frame Trap

Today’s blog post by Marion Nao adds a discourse analytic perspective to imperfect cognition via Goffman’s sociological theory of frame trap. It presents some key insights from a recent paper in Language and Communication, entitled: 'The lady doth protest too much, methinks': Truth negating implications and effects of an epistemic frame trap.

Marion Nao holds a PhD in Language and Communication Research from Cardiff University, UK, and currently teaches online for Universitat Oberta de Catalunya, Spain.


Many of us may be uncomfortably familiar with the concept and experience of a double-bind or Catch-22 situation, in which, crudely put, you’re damned if you do and damned if you don’t. Add to this complex a discursive mechanism by which the more you do, the more damned you are, in anticipation of which being damned if you don’t might seem like the lesser of the two evils, and you likely have the workings of a frame trap. In short, and metaphorically, with increased resistance, you tighten your own noose. So, how does a frame trap work, and why might it be relevant to biased beliefs?

In a recent paper, I explored this concept, which can be attributed to Goffman (Frame Analysis, Harper and Row, 1974), in relation to one specific expression: “The lady/thou doth protest too much, methinks” (Nao, 2020). This I classed as an epistemic frame trap, on the basis that it invalidates the very truth of its recipient, who is guilty of untruth whether or not they protest their innocence in response—and all the more so, the more they do. Consequently, it can be said to operate by a mechanism of recursive truth negation.


As a formulaic expression used with a high level of fixity (and only minor variations in formulation), its meaning is figurative in representing more than the literal sum of its parts. As such, it conveys a good deal of pre-packaged meaning in a conventionally applied way, supporting habitually unexamined assumptions about the way the world works and what it is we are going about doing in it. A crucial component of such formulaicity is ‘protest too much’, which indicates an excess of protestation. Problematically, such excess is itself presumed to be evidence of untruth.


From online users’ definitions of what the expression means, we can see that excess remains ambiguous: it may refer to the amount of speech, to its emotive force, or to both. Since its evidential basis is undifferentiated, it is questionable whether excess is measurable at all, and if so, to what extent any criteria might be judiciously applied, given the expression’s formulaicity of meaning and axiomaticity of use, not to mention the imperfect cognition of its user.

Core to the operation of the frame trap is the conversational expectation of denial in the face of an unjust accusation, without which the accusation is taken to be true. Yet the mechanism of recursive truth negation means that a heightened emotional response, or indeed any response at all, is likewise taken as evidence of guilt, and all the more so the stronger it is. As a tool, whether consciously used or not, the expression thus offers its user considerable scope to maintain their own set of beliefs about the world and the people in it, to the disadvantage of another’s.

Tuesday 18 August 2020

Delusion-Like Beliefs: Epistemic and Forensic Innocence?

Today's post is by Joe Pierre, Acting Chief at the Mental Health Community Care Systems, VA Greater Los Angeles Healthcare System, and Health Sciences Clinical Professor in the Department of Psychiatry & Biobehavioral Sciences at the David Geffen School of Medicine at UCLA.

The blurry line separating psychopathology and normality, in the real world and the DSM, has been a longtime interest. Twenty years ago, I attempted to disentangle religious and delusional beliefs using the “continuum” model of delusional thinking based on cognitive dimensions. More recently, I’ve tried to understand other “delusion-like beliefs” (DLBs) including conspiracy theories, a frequent topic of my blog, Psych Unseen. A forthcoming paper models belief in conspiracy theories as a “two component, socio-epistemic” process involving epistemic mistrust and biased misinformation processing.

Delusions and DLBs remain challenging to distinguish in clinical practice, particularly in the internet era, where fringe beliefs are often validated. Continuum models can be helpful, along with some categorical guidelines. Delusional beliefs are false; DLBs may not be. Delusions are usually idiosyncratic/unshared, based on subjective experience, and self-referential; DLBs usually aren’t. On the contrary, DLBs are typically based on learned misinformation, if not deliberate disinformation.

In forensics, the distinction between delusions and DLBs can be crucial. Mass murderer Anders Breivik nearly eluded criminal conviction based on how Norwegian law treats psychosis as an exculpatory factor (see Dr. Bortolotti et al’s nuanced account). For prosecutors and expert witnesses supporting their cause, the potential exculpatory role of DLBs therefore presents a sizeable headache. Consequently, a group of forensic psychiatrists led by Dr. Tahir Rahman has proposed a new diagnostic category called “extreme overvalued beliefs” to describe DLBs that they claim are easily differentiated from delusions:

An extreme overvalued belief is one that is shared by others in a person's cultural, religious, or subcultural group. The belief is often relished, amplified, and defended by the possessor of the belief… The individual has an intense emotional commitment to the belief and may carry out violent behavior in its service.

Although I agree that DLBs deserve to be separated from delusions, and that DSM-5 doesn’t help much, I’m a critic of “extreme overvalued beliefs” as a solution for several reasons (see here and here for more details):

First, diagnosing “extreme overvalued beliefs” isn’t nearly as easy as is claimed.

Second, DLBs shouldn’t be “swept under the rug” of a new psychiatric umbrella term. A fuller understanding would benefit from integrating established concepts from psychology (e.g. conspiracy theories), sociology and political science (e.g. terrorist “extremism”), and information science (e.g. belief in misinformation).

Third, “extremism” in overvalued beliefs is defined by criminal behavior, not by dimensional features of the belief itself, leaving unresolved why some people commit violent acts in the service of DLBs while most don’t.

And finally, the conceptualization has a concerning prosecutorial bias, seemingly in the service of thwarting defense efforts to claim incapacity and argue for sentence mitigation due to DLBs. Since many DLBs are learned or indoctrinated, a more nuanced view might see them through a lens of not only epistemic innocence, but potential forensic innocence as well. In a world where misinformation and disinformation now run rampant, we should at least consider that the distributed responsibility for DLBs and their occasional forensic impact extends beyond the individual.

Tuesday 11 August 2020

Ecumenical Naturalism

Today's post is by Robert N. McCauley, William Rand Kenan Jr. University Professor at the Center for Mind, Brain, and Culture at Emory University and George Graham, Professor of Philosophy at Georgia State University.

Robert N. McCauley

Our book, Hearing Voices and Other Matters of the Mind, promotes a naturalistic approach, which we call Ecumenical Naturalism, to accounting for the long recognized and striking cognitive continuities that underlie familiar features of religiosity, of mental disorders, and of everyday thinking and action.

The case for those continuities rests on two considerations. The first is empirical findings that mental phenomena (e.g., hearing voices) associated with mental disorders are more widespread than typically assumed. The second consideration concerns those continuities’ grounding in one sort of intuitive, unconscious, automatic, instantaneous (System 1) cognition, viz., maturationally natural cognition (MATNAT). MATNAT systems address a host of cognitive tasks that are critical to individuals’ survival and that have nothing to do either with one another or with religion. MATNAT systems manage fundamental problems of human survival -- handling such matters as hazard precautions, agency detection, language processing, and theory of mind (to name but a few). The associated knowledge and skills, which recur in human minds across cultures, are not taught and appear independent of general intelligence.

The by-product theory in the cognitive science of religions contends that much religious thought and behavior can be explained in terms of the cultural activation of MATNAT systems. Religions’ representations cue these systems’ operations and, in doing so, they sometimes elicit responses that mimic features of cognition and conduct associated with mental disorders. The book looks at three disorder-specific illustrations.

One occurs both in schizophrenia and in religions when people purport to hear voices of agents other than themselves, even though the experiences consist of their own inner speech and no other speaker is present. Appealing to a collection of MATNAT systems, including source monitoring, agency detection, linguistic processing, and theory of mind, the book provides an account of the perceived alien (non-self) source of a voice – distinguishing between what is experienced (one’s own silent speech) and how and when it is experienced as a voice of another agent (such as God).

A second disorder is a type of depression sometimes called a dark night of the soul, in which the inability of depressed participants to communicate with or sense their religions’ powerful, caring gods can exacerbate their depression. It is associated with prayers of petition that are perceived to be unanswered. Understanding the depression requires an exploration of cognitive systems (such as agency detection and theory of mind) at work in linguistic communication, but in which God is conceived as a displeased or indifferent listener.

George Graham

Third, by way of their rituals and pronouncements about moral thought-action fusion (TAF), i.e., the position that untoward thoughts are fully comparable morally to untoward actions, religions often can domesticate the concerns and compulsions of people with OCD. This peculiarly religio-moral exemplification of OCD is known as “scrupulosity.” Even more to the point, though, religious rituals and claims for moral TAF evoke, at least temporarily, similar obsessions and compulsions in the general population.

We contend that an exception (Autistic Spectrum Disorder (ASD)) helps prove the rule (the by-product theory). Exceptions only prove rules or support theoretical principles when those principles explain why the exception is exceptional. The earlier examples show how religions utilize cultural materials to elicit and frame experiences that excite the same cognitive apparatus that is implicated spontaneously in the corresponding mental disorders. By contrast, the cognitive impairments associated with ASD concerning theory of mind should, correspondingly, suggest constraints on religious understanding and inferential abilities among this population. We expect people with ASD, as a population, to prove exceptional regarding some salient dimensions of religious cognition. Such negative findings, then, do support the by-product theory.

Ecumenical Naturalism’s approach to mental abnormalities and religiosity promises both explanatory and therapeutic understanding. The book closes with a discussion of the therapeutic positive applicability of its theses about cognitive systems to disorders of religious significance.

Tuesday 4 August 2020

Delusions and Theories of Belief

This post is by Michael Connors and Peter Halligan. Here they discuss their recent paper entitled 'Delusions and theories of belief' that was published in Consciousness and Cognition. Michael Connors is a research associate in the Centre for Healthy Brain Ageing at the University of New South Wales. Peter Halligan is an honorary professor in the School of Psychology at Cardiff University. 

Michael Connors

One approach to understanding cognitive processes is through the systematic study of their deficits. Known as cognitive neuropsychology, the study of selective deficits arising from brain damage has provided a productive way of identifying underlying cognitive processes in many well-circumscribed abilities, such as reading, perception, attention, and memory.

Peter Halligan

The application of these methods to higher-level processes has been more contentious. In an approach known as cognitive neuropsychiatry, researchers over the past 30 years have applied similar methods to studying delusions – widely considered to be pathologies of belief. While providing some insights into the cognitive nature of delusions, the approach has still to address its reciprocal goal of informing accounts of normal belief.

This limitation is significant: As Marshall and Halligan noted in Method in Madness (1996), a unified theory of delusions is unlikely without an account of normal belief formation.

In a recent paper, we examine some of the reasons for this lack of progress and suggest a way forward for overcoming these challenges (Connors & Halligan, 2020).


From the outset, there are important differences between the two domains of study. Delusions are defined against a background of social norms and values; encompass broad aspects of experience; involve excessive functioning; and are more likely to vary over time compared to the more value-free, encapsulated, and stable deficits studied in cognitive neuropsychology (David, 1993).

In addition, the assumptions of cognitive neuropsychology may not hold in this new domain. The approach rests on four assumptions, and each can be problematic.

Central to the cognitive neuropsychology approach is the concept of modularity – the idea that cognitive processes can be decomposed into specific, relatively autonomous subcomponents. This may not apply to beliefs, which integrate the outputs of several distinct modular systems across different domains and so are not easily decomposed.

Damage to cognitive systems may not be transparent to researchers – patients may conceal beliefs for social reasons in a way that is not possible with lower-level cognitive processes.

Cognitive processes in belief formation are unlikely to be selectively impaired without impacting other processes. Many delusions occur without identifiable brain lesions and new beliefs are likely to bias lower-level cognitive processing, including perception and memory, so as to be consistent with the beliefs. New beliefs may similarly engender related supporting beliefs, producing more widespread changes in the cognitive system.

Finally, generalising between patients may be problematic if pre-existing individual differences, including premorbid beliefs, are not considered.

Current Theories of Delusions

These issues are important as a leading theory of delusions – the two factor account – is based in cognitive neuropsychology (Coltheart et al., 2011).

The theory is derived from cases of monothematic delusions, such as Capgras (the belief that a familiar person has been replaced by an imposter). Several patients with this delusion show impaired autonomic responses to familiar faces – a deficit that could account for the delusion’s content (Factor 1). There are, however, patients with this deficit but without the delusion, which gave rise to the need to posit a second factor – a deficit in belief evaluation.

This dissociation between symptoms does not provide definitive support for a second factor. There is no independent evidence of a second factor, and other differences are possible between the two groups. More fundamentally, given uncertainty about the underlying assumptions, it is not clear that the logic of dissociations can be applied.

Importantly, predictive coding accounts do not currently provide an alternative at a cognitive level. These accounts are aimed at a broader level of explanation and attempt to relate more general patterns in cognition to neurophysiology, rather than offering a specifically cognitive account (Corlett et al., 2016).

A Possible Way Forward

Connors and Halligan (2015) argued that it is possible to outline five broad stages of belief formation at a cognitive level independent of modularity and other assumptions of cognitive neuropsychology.

Beliefs are likely to arise in response to a precursor, a distal trigger of the belief’s content. This may involve, for example, unexpected sensory input or communication from others.

Between the precursor and the belief, at least two intermediate stages need to be accounted for: firstly, how meaning is ascribed to the precursor and, secondly, how such meaning is evaluated and screened.

Once a belief is formed, a fifth stage is the effect the belief has on experience and other cognitive processes. This also includes effects on earlier stages of belief formation by shaping what precursors are attended to, how they are interpreted, and how competing hypotheses are evaluated.

While admittedly still underspecified, the account has the benefit of being parsimonious, yet flexible enough to begin to account for the heterogeneity of beliefs in both the general population and people with delusions.

We believe that this account has sufficient detail to guide future research and address limitations in existing cognitive theories of delusions. Given the unique properties of belief, we also suggest that there is a need to widen and adapt research methods and offer specific proposals in our paper.

We consider that such an approach, whilst attempting to relate pathology to a model of normal function, may help cognitive neuropsychiatry reach its original goals and offer insight into both delusional and nonpathological belief.