Tuesday 31 December 2019

Explaining Delusional Beliefs: a Hybrid Model

In this post Kengo Miyazono (Hiroshima) and Ryan McKay (Royal Holloway) summarise their new paper “Explaining delusional beliefs: a hybrid model”, in which they present and defend a hybrid theory of the development of delusions that incorporates the central ideas of two influential (yet sometimes bitterly opposing) theoretical approaches to delusions—the two-factor theory and the prediction error theory. 



There are at least two influential candidates for a global theory of delusions (i.e., a theory that explains many kinds of delusions, rather than particular kinds of delusions such as persecutory delusions) in the recent literature: the two-factor theory (Coltheart, 2007; Coltheart, Menzies, & Sutton, 2010; Coltheart, Langdon, & McKay, 2011), according to which delusions are explained by two distinct neurocognitive factors with different explanatory roles, and the prediction error theory (Corlett et al., 2010; Corlett, Honey, & Fletcher, 2016; Fletcher & Frith, 2009), according to which delusions are explained by the disrupted processing of prediction errors (i.e., mismatches between expectations and actual inputs).

Which one is correct: the two-factor theory or the prediction error theory? Recent years have seen vigorous debates between the two camps. A recent example was this paper, “Factor one, familiarity and frontal cortex: a challenge to the two-factor theory of delusions”, in which Phil Corlett, one of the main figures in the prediction error camp, challenged some basic assumptions of the two-factor theoretic account of the Capgras delusion. Some of the discussions between the two camps have been hosted on this blog.

Our view, however, is that we do not have to choose one theory at the complete expense of the other. In fact, there are good reasons to seek a rapprochement between the two theories. For instance, the two-factor theory (as a general framework) tends to be rather agnostic about mechanistic details. By adopting some ideas from the prediction error theory camp, we might achieve a better understanding of the nature (and neurophysiological cause) of the second factor. Conversely, by adopting some ideas from the two-factor theory camp, we might better understand how alleged abnormalities in processing prediction errors manifest themselves at the psychological level of description.

We have previously argued that the two theories might not be irreconcilable alternatives (McKay, 2012; Miyazono, 2018; Miyazono, Bortolotti, & Broome, 2014). In support of our position, our new paper advances a particular hybrid theory of delusion formation, arguing that key contributions of the two theories can be combined in a powerful way.

According to the hybrid theory, the first/second factor distinction in the two-factor framework corresponds to a crucial distinction in the prediction error framework, namely, the distinction between prediction errors and their estimated precision. More precisely, we contend that the first factor (at the psychological level) is physically grounded in an abnormal prediction error (at the neurophysiological level), and the second factor (at the psychological level) is physically grounded in the overestimation of the precision of this abnormal prediction error (at the neurophysiological level). (Note: The “physical grounding” is a placeholder for whatever it is that relates psychological and neurophysiological levels of explanation.)

Here is how this theory applies to the Capgras delusion.

First Factor & Prediction Error: We follow the standard account in the two-factor theory camp that the first factor in the Capgras delusion is the abnormal datum about a familiar face. This abnormal datum is physically grounded in an abnormal prediction error; i.e., a mismatch between the expected and actual autonomic response to a familiar face (cf. Coltheart, 2010).

Second Factor & Estimated Precision: We adopt the hypothesis that the second factor is a “bias towards observational (or explanatory) adequacy” (“OA bias”); i.e., the tendency to form beliefs that accommodate perceptions, even where this entails adjustments to the existing web of belief (Stone & Young, 1997; McKay, 2012). The OA bias, we contend, is physically grounded in the overestimation of the precision of abnormal prediction errors (in which the first factor is physically grounded). When the precision of an abnormal prediction error is overestimated, the abnormal prediction error is prioritised over prior beliefs, and it drives bottom-up belief updating processes (cf. Adams et al., 2013; Fletcher & Frith, 2009). In effect, this is the OA bias.
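The role of estimated precision can be illustrated with a toy sketch of precision-weighted Bayesian belief updating. This is our own illustrative construction, not a formal model from either theory camp: the function, the numbers, and the Gaussian assumption are all simplifying choices made for exposition.

```python
def update_belief(prior_mean, prior_precision, observation, error_precision):
    """One step of precision-weighted Gaussian belief updating.

    The prediction error (observation - prior_mean) shifts the belief
    in proportion to how precise the error signal is estimated to be,
    relative to the precision of the prior.
    """
    prediction_error = observation - prior_mean
    weight = error_precision / (prior_precision + error_precision)
    posterior_mean = prior_mean + weight * prediction_error
    posterior_precision = prior_precision + error_precision
    return posterior_mean, posterior_precision

# Strong prior belief (e.g. "this is my wife"), confronted with a signal
# at odds with it (the abnormal datum about the familiar face).
prior_mean, prior_precision = 1.0, 10.0
observation = -1.0

# Well-calibrated precision estimate: the prior largely absorbs the error.
calibrated, _ = update_belief(prior_mean, prior_precision, observation, 1.0)

# Overestimated precision (the putative second factor): the abnormal
# prediction error dominates and drags the belief towards the anomalous datum.
overweighted, _ = update_belief(prior_mean, prior_precision, observation, 100.0)
```

On these illustrative numbers, the calibrated update leaves the belief close to the prior, whereas the overweighted update flips its sign: this is the sense in which overestimating the precision of a prediction error prioritises it over prior beliefs and drives bottom-up belief revision.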

This hybrid account can be easily generalised to many other delusions. In fact, this theory, because of its hybrid nature, has a wide scope of application. The two-factor theory provides a plausible account of a range of monothematic delusions that can arise due to neuropsychological deficits. In contrast, the prediction error theory provides a plausible account of delusions in schizophrenia. Our hybrid theory provides a unified explanation of both types of delusions.

Of course, the hybrid theory as it stands does not answer all questions about the process of delusion formation. For example, it is not clear how the hybrid theory accommodates a role for motivational factors in delusion formation (McKay, Langdon, & Coltheart, 2005). Relatedly, although the hybrid theory has a wide scope of application, it might not explain all delusions. A particularly difficult example is anosognosia, which may require a separate account (for more on the hybrid theory and delusion in anosognosia, see Miyazono, 2018).

Friday 27 December 2019

Autonoesis and Moral Agency

This post is by Phil Gerrans and Jeanette Kennett. It is a reply to the post we published on Tuesday on Metaethics and Mental Time Travel.



In Metaethics and Mental Time Travel, Fileva and Tresan (F&T) fairly and accurately reconstructed (improved?) and intricately dissected our paper. We cannot follow every twist and turn in a short blog post, so we concentrate on the key issue. They partially agree with us that semantic knowledge detached from diachronic self-awareness is insufficient for moral agency, but disagree about (i) whether that awareness needs to be "richly experiential" and (ii) the nature of the diachronic deficits in the cases we discuss (see their discussion of these cases, which is deeper than ours). As they say,

Representations with past- or future-oriented, autobiographical content, crucially, awareness of one’s past actions or future options as consistent or inconsistent with one’s principles do seem necessary: but MTT involves experiential representations of those sorts, and we seem able to conceive of agents motivated by principle without such experiential representations. Of course, in real humans those representations are often richly experiential. But if they’re required, perhaps it’s only by human nature.

We note that we would be happy to have made a claim about real humans and human nature, but we take it that their disagreement is not that we haven’t covered possible worlds in which amnesics retain experiential diachronic self-awareness and hence agency! Indeed, as (ii) above indicates, they interpret the neuropsychological cases differently than we do, arguing that a sufficient degree of diachronic self-awareness may be retained even if rich experiential diachronic self-awareness is lost (our italics).

Our point, which we took from the neuropsychological literature, was that in these cases autonoetic awareness (and hence, we argued, agency) was missing. Autonoetic awareness is a term of art introduced by Tulving to refer to the feeling that an experience is "mine". This aspect of subjective experience seems subtle and elusive rather than "rich"; so much so that its nature comes into focus only when it is lost in cases of neuropsychological damage. It is rather like the experience of familiarity, which tends to be obscured in the normal flux of experience and becomes salient when lost in cases of defamiliarisation such as the Capgras delusion or jamais vu. These cases contrast with rich perceptual experience, in which structural detail can be made the focus of attention, yielding more information.

As an example, consider R.B., a patient who lost the sense of autonoesis for his episodic memories (quoted in Klein 2012). He described his experience thus:
When I remember the scene with my friends, studying, I remember myself walking into the room... and... other things I did and felt... But it feels like something I didn’t experience... (something I) was told about by someone else. It’s all quite puzzling (Our italics).
He continued:
I can see the scene in my head... I’m studying with friends in the lounge in the residence hall. But it doesn’t feel like it’s mine... that I own it. It’s like imagining the experience, but it was described by someone else.
Whatever precisely R.B. is missing, it seems quite a subtle feature of experience. Our idea was that people who cannot automatically experience representations, including representations of their prior moral principles, as their own will have compromised agency. And if agency is necessary for judgement, compromised moral judgement. The ways in which the experience of "mineness" can be affected differ qualitatively and in degree, so it is difficult to make a categorical judgement on the basis of cases like those of R.B., H.M., and E.V.R. If, however, self-knowledge is confined to the third-personal and propositional, we stand by the view that agency is compromised. R.B. is of course an interesting case since, if he is accurately describing his experience in theory-neutral terms, he is a case of intact episodic memory without autonoesis: a dissociation not contemplated in the earliest characterisations of the concept of autonoesis.

How to proceed without rehearsing interpretations of case evidence and the subtleties of metaethical arguments (e.g. whether and to what degree there is convergence between sophisticated/inclusive versions of rationalism and sentimentalism)? Consider a judge with episodic amnesia and, ex hypothesi, a deficit in MTT and autonoesis. She has intact semantic memory, legal knowledge, reasoning, and executive capacities, so she is able to decide cases synchronically. She just cannot remember that she has done so, but if the matter or a similar one comes up before her again she can evaluate the arguments (again) and reach the correct decision. So her legal reasoning is consistent over time and across cases. Does such a judge have legal competence?

Many would say yes. This answer suggests that synchronic capacities absent diachronic awareness are sufficient for legal agency, understood as the ability to make the right decisions in context.

Now consider the judge deciding whether to get divorced, to retire to another country, or how to invest her savings. The relevant information is accessible to her as propositional knowledge. And grant that when she wakes up one morning in Malaysia and asks why she is there and where her family are, she can understand the explanation. She decided previously to get divorced, give her money to her children and move overseas. But she lacks the sense that the decision was hers. It might as well have been made by someone else.

Is the judge a full moral agent?

One can answer this by consulting one's intuitions or via a moral theory which produces the answer as output: for a moral externalist, presumably yes. Our approach, which we think we share with F&T, is a kind of reflective equilibrium, allowing the moral theory to be influenced by empirical evidence without ruling H.M., R.B., or E.V.R. in or out as moral agents a priori.

Tuesday 24 December 2019

Metaethics and Mental Time Travel

We are Iskra Fileva and Jonathan Tresan. Both of us teach philosophy, at the University of Colorado, Boulder and at the University of Rochester, respectively. We recently wrote a paper in response to "Neurosentimentalism and Moral Agency" by Philip Gerrans and Jeanette Kennett, published in Mind in 2010. We summarize our paper "Metaethics and Mental Time Travel" here.



When we make moral judgments, we often experientially project ourselves into the past or the possible futures, a capacity dubbed “mental time travel” (MTT). For instance, in judging whether her mom was wrong to keep her away from her dad after the parents divorced, Sally may try to recall what it was like to be her father’s daughter. Was it a good experience or a bad one? Was the mother rightly protective or just trying to spite the dad? Sally’s evaluation will likely be informed not just by propositional memories (e.g., “My father was born in June”) but by richly detailed and vivid first-personal experiential memories (e.g., what it felt like to hike with Dad).

Similarly, in deliberating about what to do in the future, we don’t just contemplate future-oriented propositions (e.g., “My mother would get upset if I said that”) but entertain experientially rich scenarios, for instance, Sally might imagine her mother’s reactions when deliberating about what to do. When we do such things, we exercise our capacity for MTT.

MTT is in fact involved in moral judgment in myriad ways. That is uncontroversial. In their paper, however, Gerrans and Kennett argue that a capacity for MTT is essential to moral judgment, a requirement they label “diachronicity.” And this fact, they argue, cuts major metaethical ice, providing support for a Kantian metaethical view according to which moral judgments are intrinsically motivating exercises of a rational faculty.

They give three main reasons for this Diachronicity Constraint. First, they cite the behavior of subjects with impaired capacities for MTT such as amnesiacs and certain vmPFC patients. The difficulties these subjects have in making moral judgments, they say, evince the need for MTT in moral judgment. Second, they argue that MTT deficits necessarily undermine moral agency, but moral agency in turn, they argue, is necessary for moral judgment. Third, they claim that the Diachronicity Constraint follows from the essential normativity of morality.

In our paper we rebut all three of their arguments. First, at least some of the people with impaired MTT capacities discussed by Gerrans and Kennett seem quite capable of making moral judgments. We focus on the case of E.V.R., described by Eslinger and Damasio (1985). E.V.R., we contend, has decision-making deficits but is quite capable of judging morally. Eslinger and Damasio report that he was quite capable of reasoning intelligently when presented with a version of the Heinz dilemma. Gerrans’ and Kennett’s case, we suggest, relies on a tendentious interpretation of the evidence.

Second, even if moral agency requires MTT as Gerrans and Kennett claim, it is implausible that moral agency is required for moral judgment. A person may be fully capable of judging morally without being a moral agent, say because her will is thoroughly impaired due to depression or addiction. 

Third, Gerrans’ and Kennett’s case that MTT is necessary in order for morality to be normative relies on an unsupported rationalist interpretation of what morality’s normativity consists in. On that interpretation, reasons are normative for a person only when they are independent of immediate stimulus-bound responses. It is a short step from here to the conclusion that we must accept the Diachronicity Constraint.

But the rationalist interpretation of normativity is not the only game in town. Accounts of moral normativity that omit MTT from the necessary conditions for normativity are abundant (for instance, the sentimentalist Michael Slote (2007) derives an account of normativity from empathy and its role in moral reasoning). The metaethicists Gerrans and Kennett target with the Diachronicity Constraint can and do call upon alternative accounts. It would not do for Gerrans and Kennett to argue for Diachronicity by assuming the truth of an account of normativity that supports the Constraint.

Tuesday 17 December 2019

Frozen II and Youth Mental Health

In this post I reflect on what the Disney film Frozen II can tell us about youth mental health. (This is a slightly expanded version of a post that appeared on the University of Birmingham website on 16th December 2019.)

When it was released in 2013, Frozen was praised for having a leading female character who was different: a guest at Elsa’s coronation calls her a monster when she loses control; Elsa isolates herself from the people she loves for fear of harming them; and she is distressed because she does not fully comprehend what is happening to her. Elsa does not ‘fit in’, and often makes those around her feel uncomfortable.




When Elsa celebrates her liberation from her stuffy conventional life with the song “Let it go”, some critics talked about Disney’s ‘gay agenda’ and Elsa was welcomed in some circles as a queer icon. Some were hoping that she would get a girlfriend in Frozen II. But there is another form of diversity that Elsa embodies just as convincingly, that of a young person who struggles with her mental health, attempts and fails to suppress those unusual experiences that make her different, and is (at first) neither fully understood nor supported by her immediate social circle.

At the end of the original Frozen film, the renewed love and understanding of her sister come to the rescue, and Elsa learns to control her 'power'. She agrees to play her role as queen, even showing that her quirkiness can have some unexpected benefits. However, the last scene where she turns the palace courtyard into an ice-rink leaves several questions hanging in the air. Now knowing that she is different, will her people trust her not to disrupt their ordinary lives again? Will she be able to adjust to a life that never felt like her own just because she is allowed to play with a little ice?

Those are the questions to be answered in the sequel. If the mental health angle might have been dismissed in Frozen as reading too much into the character (but see this and this), Frozen II confirms that Elsa has experiences that other people do not share and do not understand. Indeed, the whole premise of the film is that Elsa hears voices, not just any voice, but a voice that makes her doubt the nature of the reality she is supposed to accept and invites her to distance herself once again from a normality that she never found authentic. 

In the course of the film, Olaf and Anna keep referring to Elsa as someone who may lose credibility or need protection because of her difference, and as someone who takes unnecessary risks and is bound to lose control in critical situations. Elsa appears frustrated as she appreciates the concerns of her sister and friends, but does not want to be sensible—or she does not know how. In the end, Elsa’s difference is openly acknowledged, and as a result ‘normality’ is no longer imposed on her. She remains an outsider, one who has learnt to tame her demons by embracing her difference.

Young people who struggle with their mental health may not meet Hollywood beauty standards as Elsa does, enjoy the privileges of a queen’s wealth, or have crowd-pleasing superpowers such as the capacity to make stunning ice sculptures on a whim. Reality is still a world away from a Disney fairy tale. However, it is refreshing to see that a Christmas blockbuster for children invites us to reflect on how having unusual experiences can affect young people’s lives and the lives of those around them, for better or worse. 

In particular, the film addresses some key issues that we investigated as part of project PERFECT: the importance of supportive personal relationships for the capacity that young people have to manage difficult situations; and the risks of self-doubt and self-stigma, when young people come to think of themselves as unreliable or dangerous due to the influence of society’s prejudices, because they see themselves through the eyes of others. We cannot change popular culture overnight, but creating opportunities to talk and think about the effects of unusual experiences is important, and Frozen II is a good conversation starter.

Tuesday 10 December 2019

Epistemic Norms for the New Public Sphere

Today's post is by Natalie Ashton (University of Stirling). She is reporting on a workshop held at the University of Warwick on 19th of September as part of the AHRC-project Norms for the New Public Sphere. It was the first of a series of workshops planned to take place over the next two years. These workshops are designed to bring together academic philosophers with media scholars, professionals, and activists in order to investigate the opportunities and challenges that new social media pose for the public sphere. This first workshop focused on the epistemic norms that can foster a public sphere in which democracy can flourish.





Alessandra Tanesini kicked the event off with a talk titled “Bellicose Debates: Arrogant and Liberatory Anger On and Off-line”. Her main claim was that anger can be divided into different kinds: status anger, which is typically arrogant, and liberatory anger, which she says can offer distinctive motivational, epistemic, and communicative benefits in the fight against oppression. She also argued that some calls for civility in online debates are complicit in claimant injustice, when liberatory anger is muted because it is misinterpreted as mere ranting or venting.

The next two talks had a common theme: both cautioned against the use of inaccurate terms for problematic online phenomena on the grounds that this can disguise their pernicious effects. In her talk “Echo Chambers, Fake News, and Social Epistemology” Jennifer Lackey argued that the problem with Trump’s reliance on Fox News is not, as is often claimed, that it allows him to exist in an echo chamber. In fact, Lackey argues, echo chambers can be a good thing. Rather, Trump’s problem is that his favourite TV channel reports fake news (or lies), and that this is amplified online by fake news approvers (or bots). Lackey went on to distinguish between criticising the structure of epistemic environments and criticising their content, and argued that social epistemologists should be less afraid of making the second kind of criticism.

Tuesday 3 December 2019

Norms for Political Debate: An Interview with Fabienne Peter

For today's post I interviewed Fabienne Peter, Professor of Philosophy at the University of Warwick, specializing in political philosophy, moral philosophy, and social epistemology. She talks about her research interests, a new exciting project she is participating in, and the role of philosophers in public life.




LB: How did you become interested in the norms that govern political debate?

FP: I’ve been doing research on the question of what makes political decisions legitimate for some time now. This research has led me to see that an inclusive and fair political debate is an important condition for legitimate political decision-making.

Inclusive and fair political debate of political issues matters in a number of ways. It helps to gather relevant considerations that bear on the decision-making, for example in relation to the implications of possible political decisions for different people. It also helps to weigh the importance of those considerations. Political debate matters in the context of informed democratic decision-making, but also for decision-making by political representatives, e.g. by a prime minister.

It is a mistake to think that regular democratic elections or referenda are sufficient for the legitimacy of democratic decisions. The political debate in which elections or referenda are embedded helps with making sense of the decision-making process and with interpreting its results. For example, in relation to the recent Brexit referendum, it is important to see that this referendum did not give a mandate for no deal. No deal was explicitly ruled out as a viable option by all sides in the debate leading up to the referendum in June 2016.

If we accept that a well-ordered political debate is an important condition for legitimate political decision-making, the next question is, then, which norms a well-ordered political debate must satisfy. And I believe that we cannot answer this question by just focusing on moral norms. A well-ordered political debate must also effectively respond to our best knowledge about how different political decisions affect people, when such knowledge is available, and it must respond appropriately when political decision-making is affected by inconclusive evidence, uncertainty, and disagreement. In other words, a well-ordered political debate will also satisfy certain epistemic norms – norms about what to believe about our political circumstances and the best decisions in those circumstances.

LB: How do you think political debate has changed with the emerging of social media platforms? Can you see positive as well as negative consequences of the new media?

FP: When internet technology first reached a stage where it became possible to engage in political debate online, there was great optimism about the democratising potential of the internet. By reducing the cost of political participation and eliminating other barriers to access, the new technology was seen as having the potential to make political debate more inclusive and fair. While the democratising effects remain important, we are now becoming aware that there are potentially pernicious effects, too. Social media platforms have led to new forms of segregation in political debate and enabled targeted political influencing that is only accessible to some groups and not to others.

I can thus definitely see positive and negative consequences of the new media. On the one hand, there is potential for greater inclusivity and, through that, greater scrutiny of controversial political claims and proposals. On the other hand, more and more fine-grained virtual political segregation has become possible, which undermines inclusive political debate and facilitates bad faith manipulation of the political decision-making process.

LB: You are co-investigator in an exciting project, Norms for the New Public Sphere: Institutionalising Respect for Truth, Self-Government, and Privacy, funded by the AHRC. What do you and your team hope to achieve at the end of the project?

FP: It is an exciting project! This collaborative project aims to identify the set of norms – moral norms and epistemic norms – that can underpin regulatory frameworks for the new public sphere. By ‘new public sphere’ we mean the way in which political debate is now influenced by social media technology and related technology, for better or worse. Existing regulatory frameworks that influence political debate tend to focus on the press.

As political debate has now largely moved online, existing regulatory frameworks are no longer adequate to ensure a well-ordered political debate. And while there is currently quite a bit of interest in the question of how to regulate the social media sphere, there is more focus on online harms (cf. the recent Government White Paper) or other ways in which social media platforms might be used for criminal activity. Not enough attention has been paid to the question of how to regulate the way in which social media platforms influence political debate, a question that we believe is vital for a healthy democracy.

Our project is a philosophical project. We are thus focusing on the moral and epistemic norms that could underpin the regulation of the new public sphere, rather than on the question of what an adequate regulatory framework would look like. But our project is not purely philosophical and includes practitioners. The original idea for the project came out of discussions between Jonathan Heawood, CEO of Impress, a media regulation NGO, and Rowan Cruft, a philosopher. They then brought me on board because of my work on the epistemic norms that apply to political deliberation. The project now also includes Natalie Ashton, a philosopher who works on political epistemology. It has Doteveryone, a technology think tank, as a project partner, and we are also involving other media professionals as well as representatives of government agencies involved in social media regulation.

LB: You have taken steps to engage the public in your work, and have written for The Conversation and The Philosophers’ Magazine. You have also participated in panel discussions open to the public and in podcasts. How do you see the role of the philosopher in public life?

FP: I believe in the value of specialisation. Philosophy, like other disciplines, continues to advance our understanding of important issues. In my area, for example, philosophers have achieved a sophisticated understanding of the nature of morality and, relatedly, the justification of moral and political claims. And in order to facilitate such advancements, philosophers must be able to narrow their focus and work on some highly abstract questions, questions that do not immediately seem relevant, or even make sense, to non-specialists.

But I also believe that there can be too much specialisation or, as economists call it, path-dependency. Specialisation can put researchers at risk of working on questions that only make sense derivatively – given the research tradition in which they stand – and that no longer relate back to important issues. Exploring how my research might be of interest and relevance to non-philosophers provides me with an important test for the overall value and significance of my research programme and a safeguard against excessive specialisation.

There is another reason why I believe that doing public philosophy matters in these turbulent political times. We can’t take our achievements for granted at the moment. Lessons that I thought the world had learned – often on the back of anti-slavery, anti-imperialist, and civil rights movements, for example, and of events such as the second world war – are now too often ignored and denigrated. I believe that as philosophers we have a responsibility to go public with our work in this context. We should add our voice to public debates, making vivid the important lessons already learned and testing new ideas.