Tuesday, 15 June 2021

Agency in Youth Mental Health (2): Matthew Broome


Matthew Broome

This post is the second in a series of posts on a project on agency and youth mental health funded by the Medical Research Council and led by Rose McCabe at City University. The research team members were asked the same four questions and today it is Matthew Broome's turn to answer.

Matthew is an academic psychiatrist and Director of the Institute for Mental Health at the University of Birmingham. His main research interests lie in the field of early psychosis and in the philosophy and ethics of mental health. 




What interests you about clinical encounters with young people in the mental health context?

There were two main drivers to my interest. One is very practical: as a psychiatrist I often see young people with mental health problems and am aware of the difficulties they can face in getting the help and understanding they would like. 

The second driver is more theoretical, but with potential clinical relevance. I became interested in how young people may change their conceptualisation of themselves, of their competence, agency, and responsibility, through the experience of clinical encounters and how the interactions with professionals could themselves shape that self-understanding, positively or negatively. Through studying these encounters, I hope we can improve the positive consequences for young people and lessen the negative impacts.

Why is the focus on agency important in this context?

Agency is important for us in the project as we want to examine how the young person is able to be an agent in the clinical encounter, that is, have their own views, opinions, and choices recognised and valued, and how we can improve these clinical meetings to further support agency. 

We’re aware that too much freedom and choice can itself be overwhelming, particularly when young people may be experiencing mental distress, but conversely, it’s important that young people are not treated as passive and do not have their views ignored. The agency of young people, both as conveyors of knowledge and as bearers of responsibility for clinical and moral choices, is a key theme for us in the project, and one that we see as dynamic and as connected, in part, with the interpersonal experience of the clinical encounter.

What do your experience and disciplinary background bring to the project?

I hope I have been able to bring my experience and background as a medical doctor and clinical psychiatrist, in which I have worked with many young people with predominantly psychotic illnesses. In parallel, I have supported a close family member, who has had times of poor mental health, in their clinical consultations with professionals. In addition, I hope I have been able to bring a wider experience of how NHS services and professionals work, and how we train clinicians. 

In addition to clinical experience, I hope I have also brought an interest in the philosophy and ethics of mental health care: specifically, the ideas of the philosopher Miranda Fricker on epistemic injustice and how it can be experienced by young people with mental health problems, and themes from German Idealist philosophy that run into twentieth-century phenomenological existentialism, namely how the self is in some sense constituted by its relation with the Other.

What do you hope to see as an outcome of this project?

The main outcome I hope for is to take what we can learn from the clinical encounters and use it to help professionals in various settings (for example, primary and secondary care, education) improve how clinical consultations are conducted, such that the mental health of young people is improved and they are given support to enhance their agency in relation to their care and the choices they can make.

Monday, 14 June 2021

Agency in Youth Mental Health (1): Rachel Temple

Rachel Temple

This post is the first in a series of posts on a project on agency and youth mental health funded by the Medical Research Council and led by Rose McCabe at City University. The research team members were asked the same four questions and today it is Rachel Temple's turn to answer.

Rachel is a Public Involvement & Research Manager at the mental health research charity The McPin Foundation, where she leads McPin's Young People's Advisory Group and the wider young people's network. She is passionate about ensuring the meaningful involvement of young people in mental health research in ways that are comfortable, accessible, and engaging, and she regularly draws on her own experience of social anxiety when facilitating. 




Rachel is responsible for ensuring that no key decisions are made without consulting with the young people on this project, seeking their input on things such as project aims, design, results, and presentation of findings. Rachel also identifies as a young person with lived experience of accessing mental health services and is a former support worker in young people's inpatient services. To learn more about Rachel's work, follow the Young People's Network at the McPin Foundation on Twitter or Instagram.

What interests you about clinical encounters with young people in the mental health context?

My interest in this area comes from a lived experience perspective. As a young person with mental health issues myself, I have seen first-hand how instrumental those initial conversations are in instilling trust. Feeling validated and feeling that you have been listened to are some of the main things that help to establish that trust. 

Effective communication means absolutely everything in these encounters. For some, they really are ‘make or break’: if handled poorly, they can completely discourage a young person from ever seeking support again. This can cause a young person’s mental health to deteriorate further. As a former support worker for young people with mental health issues, I have seen this happen. I am deeply interested in what we can do to stop this from happening. Ultimately, I am interested in what we can learn about agency to improve these interactions and in turn, ensure that young people with mental health problems feel better supported.

Why is the focus on agency important in this context?

From what we have witnessed, the concept of agency - and the professional’s perception of the young person’s agency - can steer these interactions in a particular direction. We have seen examples whereby a young person is considered to have so much agency that their plea for support is dismissed. 

In some cases, they have even been held responsible for the difficulties that they are experiencing. Or they are simply told that they should go ahead and do the things that the professional advises – like taking their medication properly. It is important for us to understand why and how these judgments about agency are formed, how they impact mental health interactions - and what we can do about it.

What do your experience and disciplinary background bring to the project?

 

I wear multiple hats on this project: my lived experience expertise, my prior experience as a mental health support worker, and my current role as Young People’s Involvement lead at The McPin Foundation. I do my best to draw from all three where applicable. Above all, my role is to ensure that we are involving young people every step of the way and within every decision we make about the project.


What do you hope to see as an outcome of this project?

Personally, I have learned so much from this project, merely by listening to what other people on the team have to say. I hope for us to share learning with those who will benefit from it the most: mental health professionals and young people. 

If we can hammer home the importance of agency in establishing effective relationships between these groups, I think we could see a real positive shift in these interactions. To do this, I hope we can identify some of the major ingredients for a successful mental health interaction, and then share that knowledge in a meaningful, accessible and engaging way.





Tuesday, 8 June 2021

Social Approaches to Delusions (5): Turning Away from the Social Turn

Here is the fifth post in our series on social approaches to delusions. Today, Phil Corlett raises some concerns about the arguments proposed in favour of a social turn in the previous posts, offering a different perspective.


Phil Corlett


Lots of people I like and respect who think about delusions have recently decided that social processes are relevant to belief formation and maintenance and thence to delusions. I call this the social turn.

The preceding blog posts in this fascinating series suggest:

1) That we give testimony about the quality of other individuals as sources of testimony, and that, as such, both how we should define delusions and how (given their social contents) delusions arise within individuals involve inherently social processes.

2) That testimonial abnormalities might be domain specific and dissociable from general reasoning abnormalities, and further that the socially specific deficit is one of coalitional cognition – how we form and sustain alliances with conspecifics.

3) That the brain’s statistical algorithms operate in the control centre of a unique primate that evolved to navigate a distinct (social) world of opportunities and risks, and, as such, any computational account of delusions should honor that social domain specificity.

4) That there may be a new learning mechanism, through which jumping to conclusions – a bias widely held to be relevant to delusions - becomes contagious, jumping from individual to individual in a chain of agents drawing conclusions.

These seem like four very good reasons to take the social turn seriously. Other good reasons include the apparently social contents of delusions, as well as empirical data that seem to suggest that a domain specific coalitional mechanism is relevant to delusions [see Raihani and Bell 2018 and Bell et al. 2020].

 

With all due respect to these authors, and in deep appreciation and admiration of their work, I would like to push back today.

I wonder:

1) Whether we need to posit a domain specific mechanism, when perhaps a general learning mechanism might suffice? 

I want to be very clear: I acknowledge that humans are exquisitely social, and that we have specialized mechanisms for social cognition and interaction. We are influenced by the elegant work of Cecilia Heyes, who argues that much of what we call social cognition across species is actually driven by domain-general precision-weighted inference mechanisms [Heyes and Pearce 2015]. Put simply, we learn about other people as if they were cues with a mean expected value and a reliability [Heyes et al. 2020] (this could be a mechanism through which we give testimony about others' testimony). 
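To make that picture concrete, here is a minimal sketch, in Python, of what it could mean to treat an informant as a cue with a mean expected value and a reliability: a Kalman-style update in which the learning rate is the relative precision of the new evidence against the current belief. This is not drawn from Heyes's or Corlett's work; the class, the parameter values, and the scoring of testimony as numbers between 0 and 1 are all illustrative assumptions.

class CueEstimate:
    """Treats an informant as a cue with an estimated mean value and a variance."""
    def __init__(self, mean=0.0, variance=1.0, noise_variance=0.5):
        self.mean = mean                      # expected value of the informant as a source
        self.variance = variance              # uncertainty about that expectation
        self.noise_variance = noise_variance  # assumed noisiness of any single observation

    def update(self, observation):
        # Precision-weighted (Kalman-style) update: the more uncertain the current
        # belief is relative to the observation noise, the larger the learning rate.
        gain = self.variance / (self.variance + self.noise_variance)
        self.mean += gain * (observation - self.mean)
        self.variance *= (1.0 - gain)         # the belief becomes more precise
        return self.mean, self.variance

# Repeated accurate testimony (scored here as 1.0 = fully accurate) raises and
# sharpens the estimate of this informant's value as a source.
informant = CueEstimate()
for outcome in [1.0, 0.8, 0.9, 1.0]:
    mean, variance = informant.update(outcome)
print(round(mean, 2), round(variance, 3))

Nothing in the update itself is proprietarily social: the same machinery could just as well be weighting a lever or a light, which is the point of the domain-general view.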

Evidence for this type of view is extensive. Some of the most compelling comes from developmental work in humans: human infants' domain-general associative learning abilities predict their social cognition and behavior later in life [Reeb-Sutherland et al. 2012]. I would like to suggest that much of social cognition involves ill-posed and recursive inference problems. These are hard problems. They tax the inference machinery extensively. Any insult to that inference machinery will impair social inference (as well as inference more broadly). This would be consistent with our observations relating paranoia in patients, on the continuum, and perhaps even in rodents, to non-social precision-weighted updating [Reed et al. 2020]. We still need to get from our non-social deficit to an extremely social belief. 

Briefly, following Sullivan and colleagues, I think that having an enemy or persecutor can actually be reassuring. Perceiving that enemy as a source of misfortune increases the sense that the world is predictable and controllable, and that risks are not randomly distributed [Sullivan et al. 2010] – blaming enemies might assuage the uncertainty that characterizes high paranoia, delusions, and psychosis more broadly. In settings where a sense of control is reduced, people will compensate by attributing exaggerated influence to an enemy, even when the enemy's influence is not obviously linked to the hazards they face.

 

To be clear again, neither I nor Prof. Heyes disavows the presence or importance of domain-specific social mechanisms in human cognition and comportment, or, indeed, the existence of human-specific and extremely impactful processes of social exchange (like language, in the service of communicating meta-cognitive precision for interlocution and, ideally, shared belief updating [Heyes et al. 2020]). I would call these social cognition proper.

I’d like to suggest that those inclined toward the social turn need to show that delusions are particularly related to these specific mechanisms (like theory of mind).

When social and non-social streams of information are available for inference by people who are highly paranoid, it is not clear that they have a specific problem with the social that is not also present in handling the non-social [Suthaharan et al. 2021, Rossi-Goldthorpe et al. 2021].

In a recent meta-analysis of all functional magnetic resonance imaging studies of prediction error [Corlett et al. 2021], my colleagues and I found that there are regions (including the striatum, midbrain, and insula) that carry prediction errors across domains (like primary rewards, perception, and social variables). However, we also found some more domain-specific prediction errors; for example, we saw prediction errors climbing the visual hierarchy during visual perception. 

Crucially, we found a social domain-specific prediction error in the dorsomedial prefrontal cortex (though, in something of a replication of what was found recently with direct recordings [Jamali et al. 2021], this signal was also present in non-social tasks, albeit less so). Perhaps one way that we might adjudicate between domain-general and social-specific accounts would be to show that delusions are more related to one or the other of these circuits, and the behaviors that they underwrite.

2) How well a coalitional cognition mechanism can explain the contents of all delusions? 

To be fair, this is also a problem for domain general theories, but, since the social turn is supposed to solve that problem for us, it is important to evaluate whether the social turn achieves its ends. I think it works best to explain paranoia, and, indeed the data so far have largely focused on the continuum of paranoia, rather than persecutory delusions. 

Commonly, the next delusional theme mentioned by social turn takers is grandiosity. The idea here is that grandiosity serves to protect against low self-esteem through the coalitional mechanism, by convincing others of one’s power and insights. I am not sure the available data really support this inflationary account of grandiosity.

I remain curious: how might coalitional threat explain misidentification delusions? What subprocesses of coalitional cognition would we need to delineate and dissociate so that someone could get Capgras delusion rather than Fregoli delusion (again, I know a domain-general account struggles here too)? What is the coalitional explanation of Cotard delusion? The social coalitional turn honors the power dynamics implicit in passivity delusions, but what links between coalitional cognition, action, intention, and proprioception would need to exist for the social turn to work?

 

3) If the extant data regarding social cognition and social contagion in people with schizophrenia are consistent with a failure of coalitional cognition? 

Sometimes people with delusions (or paranoia) rely excessively on others’ testimony [Rossi-Goldthorpe et al. 2021], sometimes they respond less to others’ suggestions [Hertz et al. 2021], and they can be overconfident in their own advice [Rossi-Goldthorpe et al. 2021, Hertz et al. 2020]. 

No doubt people with schizophrenia have deficits in social cognition, and perhaps the tasks that have probed these challenges have failed to engage the underlying coalitional deficit; however, one would imagine that a foundational deficit would come readily to the fore and explain more of the variability in delusions and/or hallucinations. The associations that have been reported are often specific to paranoia, rather than to delusions or positive symptoms more broadly, and they are complex – dependent on IQ and negative symptoms [Bliksted et al. 2017] – and sometimes counterintuitive; mild to moderate impairments of social cognition are associated with fewer positive symptoms, but more paranoia [Nelson et al. 2007].

When the authors across previous posts talk about an evolved mechanism dedicated to social information, it brings to mind a module – though I know many reject that term. One could imagine a 2-factor account wherein the belief evaluation deficit (Factor 2) was one of coalitional cognition. The 2-factor explanation of paranoia actually invokes rather domain-general mechanisms (of sensory or cognitive loss) as Factor 1 [Langdon et al. 2008]. 

These raise uncertainty and demand belief updating. Ironically, this places the 2-factor explanation closer to my own – though of course I reject a strict separation between perceptual uncertainty and belief updating. Consider the elaborate visual hallucinations of Charles Bonnet Syndrome. The person experiencing these may, over time, come to question and reject their veracity despite their vividness and persistence. Here the abnormal experience does not usually generate paranoia, though it can do (see, for example, [Makarewich 2011]).

The relevance of the jumping to conclusions bias to delusions is by no means certain [Tripoli et al. 2021], and it is unclear whether contagion of such jumping should be increased or decreased in people with delusions. However, the elegant paradigm outlined by Sulik and colleagues could be extremely relevant to folie à deux (wherein a non-psychotic person ‘catches’ a delusional belief from a close conspecific) and perhaps to the online radicalization toward conspiracy theorizing we have observed over the past year (folie à internet?). Such contagion (or lack thereof) may even provide an empirical basis for distinguishing delusions from other odd delusion-like beliefs.

I thank the previous authors for giving much food for thought. In my lab, we’ve taken their ideas very seriously. Based on our data, their data, and others’, I am not quite ready to take the social turn, but I’ve learned a lot by considering it.

Tuesday, 1 June 2021

Social Approaches to Delusions (4): Collectively Jumping to Conclusions

Justin Sulik

Here is the fourth post in the series on social approaches to delusions, after the posts by Miyazono, Williams and Montagnese, and Wilkinson. In today's post, Justin Sulik, Charles Efferson, and Ryan McKay discuss their new paper, “Collectively Jumping to Conclusions: Social Information Amplifies the Tendency to Gather Insufficient Data”.

Justin Sulik is a postdoctoral researcher in the Cognition, Values and Behavior group at Ludwig Maximilian University of Munich. Charles Efferson is Professor in the Faculty of Business and Economics at the University of Lausanne. Ryan McKay is Professor of Psychology at Royal Holloway, University of London.

 

Charles Efferson


Human beings are inveterate misbelievers. At the individual level, our propensity to false beliefs about our prowess and prospects can be costly and dangerous: promoting harmful behaviours like unsafe driving, smoking and overspending. Spreading and amplifying in large groups, however, such misbeliefs – we might call them “collective delusions” – can have catastrophic consequences. Widespread suspicions that vaccines cause more harm than good, that coronavirus is just another flu that will pass, or that climate change is a hoax, decrease people’s intentions to vaccinate, to self-isolate, or to reduce their carbon footprint, triggering or exacerbating global public health emergencies and environmental disasters.

How can we explain collective delusions? Psychological explanations typically appeal to two main causal pathways (see a and b in the figure). The first is a matter of individual psychology: some of us have cognitive biases or deficits that render us, in isolation, more prone to misbeliefs. The tendency to “jump to conclusions”, for instance – forming beliefs on minimal evidence – is thought to play a role in the formation of clinical delusions. 

[Figure: the causal pathways discussed in the text – (a) individual cognitive biases, (b) social influence on beliefs, and (c) social influence on the biases themselves]
The second pathway is social: People are influenced by the beliefs of those around them, particularly by those with status and prestige. For instance, when a high-status individual like President Trump claims that vaccinations are causing an autism epidemic, that the coronavirus pandemic is totally under control, that President Obama was born in Kenya, or that global warming is a hoax created by the Chinese, these beliefs are likely to increase among the population.

In a new pre-registered study, we test whether a third causal pathway exists (pathway c in the figure), whereby social information affects the cognitive biases themselves, in addition to the beliefs those biases occasion. Can social context explain not just what people learn but also how they learn?

We investigate the “jumping to conclusions” bias specifically. To explore whether a social context can amplify this individual learning bias, we embedded a probabilistic reasoning task (the well-known “Beads Task”) in a social transmission-chain procedure. We recruited 2000 participants and randomly assigned them to one of 100 chains or groups (20 participants per group). Within each group, an initial participant (assigned to Position 1 within their group) undertook the Beads Task. They had to draw coloured beads from an urn, filled with beads of two colours, in order to discover the colour of the majority of beads in the urn. Each participant could decide how much evidence they gathered (specifically, how many beads they wanted to draw) before making their decision about the majority colour.



Ryan McKay


Subsequently, a second participant (assigned to Position 2 within their group) was given information about how many beads the first participant gathered before they also undertook the Beads Task. Thereafter, the participant in Position 3 was given information about the data-gathering decisions of participants in Positions 1 and 2, and so on. This was the procedure for half of the groups (our social condition). The other half were assigned to an asocial control condition, which was identical except that participants were not given information about the data-gathering decisions of previous participants in their group.
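For readers who like to see a design as code, here is a toy sketch of the chain structure just described. It is not the authors' code: only the information structure follows the description above (twenty positions per chain, with earlier participants' draw counts visible in the social condition and withheld in the asocial control), while the "agent" rule, anchoring halfway between one's own inclination and the mean of the observed counts, is an illustrative assumption rather than the paper's model.

import random

CHAIN_LENGTH = 20                     # participants per chain, as in the study

def private_draw_count():
    # How many beads an agent would draw with no social information.
    # Purely illustrative: a random inclination between 3 and 12 draws.
    return random.randint(3, 12)

def run_chain(social=True):
    history = []                      # draw counts visible to later positions
    for position in range(1, CHAIN_LENGTH + 1):
        own = private_draw_count()
        if social and history:
            observed_mean = sum(history) / len(history)
            draws = round((own + observed_mean) / 2)   # toy rule: anchor on others
        else:
            draws = own               # asocial control: no information about others
        history.append(draws)
    return history

random.seed(1)
print("social chain :", run_chain(social=True))
print("asocial chain:", run_chain(social=False))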

We gave a small monetary reward to participants who correctly guessed the majority colour of the urn, but also made them pay for each bead they drew before making this guess. This meant there was always a rationally optimal amount of evidence to gather. Across the board, our participants jumped to conclusions, drawing fewer beads than they optimally should have done. Crucially, this tendency was amplified when participants could observe the evidence-gathering behaviour of others in their chain (the social condition), relative to those who could not (the asocial control). Effectively, participants in the social condition “caught” this tendency from their forebears, and as a result were more likely to arrive at false beliefs about the majority colour, and to earn less money.
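As a back-of-the-envelope illustration of why there is a rationally optimal amount of evidence, consider an agent who pre-commits to n draws and then guesses the sample majority (ties resolved by a coin flip). The urn split (60/40), the reward (10 units), and the cost per bead (0.1 units) below are made-up values, not the study's parameters, and the real task was sequential rather than pre-committed; the point is only the shape of the trade-off: accuracy improves with diminishing returns while costs grow linearly, so expected payoff peaks at a finite number of draws.

import math

def p_correct(n, p_major=0.6):
    # Probability that the sample majority matches the true urn majority
    # after n independent draws (with replacement), guessing randomly on ties.
    total = 0.0
    for k in range(n + 1):            # k = number of majority-colour beads drawn
        pk = math.comb(n, k) * p_major**k * (1 - p_major)**(n - k)
        if k > n - k:
            total += pk               # sample majority agrees with the urn
        elif k == n - k:
            total += 0.5 * pk         # tie: coin-flip guess
    return total

def expected_payoff(n, reward=10.0, cost_per_bead=0.1):
    return reward * p_correct(n) - cost_per_bead * n

best_n = max(range(1, 41), key=expected_payoff)
print(best_n, round(expected_payoff(best_n), 2))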

To contextualise this, let’s return to President Trump. Aside from the kinds of beliefs Mr Trump endorses, he has also displayed a certain attitude to evidence. For example, he is famously allergic to reading. According to Trump’s former chief economic adviser, “Trump won’t read anything—not one-page memos, not the brief policy papers, nothing.” Trump himself has acknowledged that he prefers reports to contain “as little as possible.” Now, again, Trump is a prestigious individual and many people follow his lead. But in this case what they might “catch” from him is not a specific belief (e.g., that vaccines cause autism), but a learning strategy – a means by which people acquire beliefs. In particular, they might acquire a disregard for evidence (at least of the written kind).

Consistent with this, our study demonstrates that when individuals see others collecting minimal evidence before making a decision, they too are more inclined to collect minimal evidence. People who form beliefs on the basis of minimal evidence are more likely to be wrong – so, in shifting the focus from the diffusion of false beliefs to the diffusion of suboptimal belief-formation strategies, we have identified a novel mechanism whereby misbeliefs arise and spread.