Lots of people I like and respect who think about delusions have recently decided that social processes are relevant to belief formation and maintenance, and thence to delusions. I call this the social turn.
The preceding blog posts in this fascinating series suggest:
1) That we give testimony about the quality of other individuals as sources of testimony, and as such, delusions (given their social contents) arise within individuals through inherently social processes.
2) That testimonial abnormalities might be domain specific and dissociable from general reasoning abnormalities, and further that the socially specific deficit is one of coalitional cognition – how we form and sustain alliances with conspecifics.
3) That the brain’s statistical algorithms operate in the control centre of a unique primate that evolved to navigate a distinct (social) world of opportunities and risks, and as such, any computational account of delusions should honor that social domain specificity.
4) That there may be a new learning mechanism, through which jumping to conclusions – a bias widely held to be relevant to delusions – becomes contagious, jumping from individual to individual in a chain of agents drawing conclusions.
These seem like four very good reasons to take the social turn seriously. Other good reasons include the apparently social contents of delusions, as well as empirical data suggesting that a domain-specific coalitional mechanism is relevant to delusions [see Raihani and Bell 2018 and Bell et al. 2020].
With all due respect to these authors, and in deep appreciation and admiration of their work, I would like to push back today.
1) Do we need to posit a domain-specific mechanism, when a general learning mechanism might suffice?
I want to be very clear: I acknowledge that humans are exquisitely social, and that we have specialized mechanisms for social cognition and interaction. We are influenced by the elegant work of Cecilia Heyes, who argues that much of what we call social cognition across species is actually driven by domain-general precision-weighted inference mechanisms [Heyes and Pearce 2015]. Put simply, we learn about other people as if they were cues with a mean expected value and a reliability [Heyes et al. 2020] (this could be a mechanism through which we give testimony about others’ testimony).
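To make concrete what such a domain-general mechanism might look like, here is a minimal sketch of precision-weighted belief updating, treating another person as a cue with a mean expected value and a reliability. The function name, parameter values, and scenario are my own illustrations, not taken from any of the cited models.

```python
# Minimal sketch of precision-weighted belief updating: an agent tracks
# another person's "value" (e.g., trustworthiness) as a Gaussian belief
# with a mean and an uncertainty. All numbers are illustrative.

def precision_weighted_update(mu, sigma2, observation, obs_noise2):
    """One Gaussian belief update given a noisy observation.

    mu, sigma2  : current belief mean and variance
    observation : new evidence about the cue/person
    obs_noise2  : assumed observation noise (the cue's unreliability)
    Returns the updated (mean, variance).
    """
    # Learning rate = belief uncertainty relative to total uncertainty:
    # uncertain beliefs plus reliable evidence yield large updates.
    k = sigma2 / (sigma2 + obs_noise2)
    mu_new = mu + k * (observation - mu)   # prediction-error-driven update
    sigma2_new = (1 - k) * sigma2          # evidence reduces uncertainty
    return mu_new, sigma2_new

# Three consistent observations of a moderately reliable informant:
mu, s2 = 0.0, 1.0
for obs in [1.0, 1.0, 1.0]:
    mu, s2 = precision_weighted_update(mu, s2, obs, obs_noise2=0.5)
print(round(mu, 3), round(s2, 3))  # belief has moved most of the way to 1.0
```

The same machinery handles social and non-social cues alike; making the assumed observation noise smaller (a more reliable informant) produces larger updates per observation, which is the sense in which reliability weights testimony.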
Evidence for this type of view is extensive. Some of the most compelling comes from developmental work in humans. Human infants’ domain-general associative learning abilities predict their social cognition and behavior later in life [Reeb-Sutherland et al. 2012
]. I would like to suggest that much of social cognition involves ill-posed and recursive inference problems. These are hard problems. They tax the inference machinery extensively. Any insults to that inference machinery will impair social inference (as well as inferences more broadly). This would be consistent with our observations relating paranoia in patients, on the continuum, and perhaps even in rodents, to non-social precision-weighted updating [Reed et al. 2020
]. We still need to get from our non-social deficit to an extremely social belief.
Briefly, after Sullivan and colleagues, I think that having an enemy or persecutor can actually be reassuring. Perceiving that enemy as a source of misfortune increases the sense that the world is predictable and controllable, that risks are not randomly distributed [Sullivan et al. 2010
] – blaming enemies might mollify the uncertainty that characterizes high paranoia, delusions, and psychosis more broadly. In settings where a sense of control is reduced, people will compensate by attributing exaggerated influence to an enemy, even when the enemy’s influence is not obviously linked to those hazards.
To be clear again, neither I nor Prof. Heyes disavow the presence or importance of domain-specific social mechanisms to human cognition and comportment, or indeed, that there are human-specific, and extremely impactful processes of social exchange (like language, in service of communicating meta-cognitive precision for interlocution and ideally shared belief updating [Heyes et al. 2020
]). I would call these social-cognition proper.
I’d like to suggest that those inclined toward the social turn need to show that delusions are particularly related to these specific mechanisms (like theory of mind).
When social and non-social streams of information are available for inference by people who are highly paranoid, it is not clear that they have a specific problem with the social that is not also present in handling the non-social [Suthaharan et al. 2021, Rossi-Goldthorpe et al. 2021].
In a recent meta-analysis of all functional magnetic resonance imaging studies of prediction error [Corlett et al. 2021
], my colleagues and I found that there are regions (including the striatum, midbrain, and insula) that carry prediction errors across domains (like primary rewards, perception, and social variables). However, we also found some more domain-specific prediction errors; for example, we saw prediction errors climbing the visual hierarchy during visual perception.
Crucially, we found a social domain-specific prediction error in the dorsomedial prefrontal cortex (though, in something of a replication of what was found recently with direct recordings [Jamali et al. 2021], this signal was present in non-social tasks, albeit less so). Perhaps one way that we might adjudicate between domain-general and social-specific accounts would be to show that delusions are more related to one or the other of these circuits, and the behaviors that they underwrite.
2) How well can a coalitional cognition mechanism explain the contents of all delusions?
To be fair, this is also a problem for domain-general theories, but, since the social turn is supposed to solve that problem for us, it is important to evaluate whether the social turn achieves its ends. I think it works best to explain paranoia, and, indeed, the data so far have largely focused on the continuum of paranoia, rather than persecutory delusions.
Commonly, the next delusional theme mentioned by social turn takers is grandiosity. The idea here is that grandiosity serves to protect against low self-esteem through the coalitional mechanism, by convincing others of one’s power and insights. I am not sure the available data really support this inflationary account of grandiosity.
I remain curious: how might coalitional threat explain misidentification delusions? What subprocesses of coalition would we need to delineate and dissociate so that someone could get Capgras delusion rather than Fregoli delusion (again, I know a domain-general account struggles here too)? What is the coalitional explanation of Cotard delusion? The social coalitional turn honors the power dynamics implicit in passivity delusions, but what links between coalitional cognition, action, intention, and proprioception would need to exist for the social turn to work?
3) Are the extant data regarding social cognition and social contagion in people with schizophrenia consistent with a coalitional cognition failure?
No doubt, people with schizophrenia have deficits in social cognition, and perhaps the tasks that have probed these challenges have failed to engage the underlying coalitional deficit. However, one would imagine that a foundational deficit would come readily to the fore and explain more of the variability in delusions and/or hallucinations. The associations that have been reported are often specific to paranoia, rather than delusions or positive symptoms more broadly, and they are complex – dependent on IQ and negative symptoms [Bliksted et al. 2017] – and sometimes counterintuitive; mild to moderate impairments to social cognition are associated with fewer positive symptoms, but more paranoia [Nelson et al. 2007].
When the authors across previous posts talk about an evolved mechanism dedicated to social information, it brings to mind a module – though I know many reject that term. One could imagine a 2-factor account wherein the belief evaluation deficit (Factor 2) was one of coalitional cognition. The 2-factor explanation of paranoia actually invokes rather domain-general mechanisms (of sensory or cognitive loss) as Factor 1 [Langdon et al. 2008]. These raise uncertainty and demand belief updating. Ironically, this places the 2-factor explanation closer to my own – though of course I reject a strict separation between perceptual uncertainty and belief updating. Consider the elaborate visual hallucinations of Charles Bonnet Syndrome. The person experiencing these may, over time, come to question and reject their veracity despite their vividness and persistence. Here the abnormal experience does not usually generate paranoia, though it can; see for example [Makarewich 2011].
The relevance of the jumping to conclusions bias to delusions is by no means certain [Tripoli et al. 2021], and it’s unclear whether contagion of such jumping should be increased or decreased in people with delusions. However, the elegant paradigm outlined by Sulik and colleagues could be extremely relevant to folie à deux (wherein a non-psychotic person ‘catches’ a delusional belief from a close conspecific) and perhaps to the online radicalization toward conspiracy theorizing we have observed over the past year (folie à internet?). Such contagion (or lack thereof) may even be an empirical basis to distinguish delusions from other odd delusion-like beliefs.
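For readers less familiar with how jumping to conclusions is usually operationalized, here is a minimal sketch of the draws-to-decision measure from the classic beads task: an observer samples beads from one of two jars and commits once the posterior over jars crosses a decision threshold; a lower threshold means fewer draws, i.e. jumping to conclusions. The jar bias, threshold values, and function name are illustrative choices of mine, not parameters from any of the studies cited above.

```python
def draws_to_decision(sequence, p=0.85, threshold=0.9):
    """Draws-to-decision for an ideal Bayesian observer in the beads task.

    sequence  : list of draws, 1 = majority-colour bead, 0 = minority
    p         : jar bias (e.g., an 85:15 jar)
    threshold : posterior required before committing to a jar
    """
    posterior = 0.5  # start undecided between the two jars
    for n, bead in enumerate(sequence, start=1):
        like_a = p if bead == 1 else 1 - p       # likelihood under jar A
        like_b = 1 - p if bead == 1 else p       # likelihood under jar B
        # Bayes' rule over the two-jar hypothesis space
        posterior = (like_a * posterior) / (
            like_a * posterior + like_b * (1 - posterior)
        )
        if max(posterior, 1 - posterior) >= threshold:
            return n  # number of beads seen before deciding
    return len(sequence)

seq = [1, 1, 0, 1, 1, 1]
# A lower decision threshold means committing after fewer draws --
# the "jumping to conclusions" pattern.
print(draws_to_decision(seq, threshold=0.95), draws_to_decision(seq, threshold=0.75))
```

In a contagion version of the paradigm, one agent’s decision could serve as the next agent’s evidence, which is roughly how a hasty conclusion might propagate along a chain.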
I thank the previous authors for giving much food for thought. In my lab, we’ve taken their ideas very seriously. Based on our data, their data, and others’, I am not quite ready to take the social turn, but I’ve learned a lot by considering it.