Tuesday, 30 July 2019

Psychopathy, Identification, and Mental Time Travel

Luca Malatesti and Filip Čeč collaborated on the project Classification and explanations of antisocial personality disorder and moral and legal responsibility in the context of the Croatian mental health and care law (CEASCRO), funded by the Croatian Science Foundation (Grant HRZZ-IP-2013-11-8071). 

Both are based in the Department of Philosophy of the Faculty of Humanities and Social Sciences in Rijeka (Croatia). Luca is associate professor of philosophy and works mainly in philosophy of mind and philosophy of psychiatry. Filip is assistant professor of philosophy and his interests include the metaphysical problem of free will and moral responsibility, and the history of psychiatry. In this post Luca and Filip summarize their chapter "Psychopathy, Identification and Mental Time Travel", that is contained in the collection edited by Filip Grgić and Davor Pećnjak, Free Will & Action.

Psychopaths are characterised by callous, manipulative and remorseless behaviour and personality. In recent years, scientific research on psychopathic offenders, but also on so-called successful psychopaths, who do not necessarily offend, has increased considerably. Robert Hare's Psychopathy Checklist-Revised (PCL-R) is a diagnostic tool that has played an important unifying role in this research (Hare 2003).

The issue of the legal and moral responsibility of persons classified as having psychopathy has attracted philosophical attention (Kiehl and Sinnott-Armstrong 2013; Malatesti and McMillan 2010). Some have maintained that the capacity for mental time travel might be relevant for moral responsibility and that psychopaths lack this capacity (Kennett and Matthews 2009; Levy 2014; Vierra 2016). In relation to the past, mental time travel is the capacity to have memories of past episodes in which the agent was personally involved. In relation to the future, mental time travel involves prospection, the capacity to imagine future situations in which the agent might be involved.

Thursday, 25 July 2019

Care and Self-harm on Social Media: an interview with Anna Lavis

Anna Lavis is a Lecturer in Medical Sociology and Qualitative Methods in the Social Studies in Medicine (SSiM) Team in the Institute of Applied Health Research at the University of Birmingham. She also holds an honorary research position in the Institute of Social and Cultural Anthropology, University of Oxford.

Her work explores individuals’ and informal caregivers’ experiences and subjectivities of mental illness and distress across a range of social and cultural contexts, both offline and on social media, with a particular focus on eating disorders and self-harm. 

In this post Eugenia Lancellotta interviews Anna on her latest project, Virtual Scars: Exploring the Ethics of Care on Social Media through Interactions Around Self-Injury, funded by the Wellcome Trust, Seed Award in Humanities and Social Science.

EL: How did you become interested in the ethics of care in self-harming online communities?

AL: I started work on relationships between social media and mental health during my MSc in 2002/2003. At that time, my research was focused on Anorexia Nervosa and the investigation of pro-anorexia websites, and I then completed a PhD in which I conducted an ethnography of pro-anorexia websites and an eating disorders unit side by side. From this research it was clear that rather than solely promoting dieting and anorectic behaviours, pro-anorexia websites also offered a non-judgmental platform; people struggling with eating disorders shared their experiences and cared for one another in ways that were undeniably ambivalent and complex but also potentially life-saving.

More recently, I began to wonder whether similar dynamics of care were found in online discussions related to self-harm, and if so, what the meanings, ethics and implications of such care might be. This is how I got inspiration for the project on the ethics of care of self-harm interactions on social media.

EL: Why do you think people go online searching for self-harming communities?

AL: Mainly for two reasons. The first is that people who may be self-harming often feel misunderstood and stigmatized by society. The second is that offline support services may be difficult to access - such as at night - and they too are not free from prejudice. Social media may fill these gaps, offering a platform where people can share their experiences and get virtual support from other people with similar experiences 24 hours a day.

Policy makers should consider these aspects more carefully when banning some of the content of these interactions (such as self-harm imagery), as this risks also preventing access to what can be precious forms of support for a young person struggling with self-harm or an eating disorder. For example, on social media participants may provide mutual support by virtually 'sitting with' a person in crisis, or offering ways to resist the urge to self-harm.

Everyone wants to be listened to and understood; it is a shared human need which society does not always meet when it comes to people suffering from mental health conditions or living through trauma and distress. Online communities can be a lifeline.

EL: You have highlighted some arguments in favour of self-harming online spaces. What do you think the drawbacks might be?

AL: The main one is that interactions on social media can contribute to normalizing behaviours that may be dangerous, such as self-starvation or self-harm. However, we must bear in mind that there is a difference between normalization and causation. There is no evidence, for example, that seeing self-harm imagery online causes a young person to self-harm. Our work has consistently found that a young person seeking self-harm imagery or discussions on social media is already self-harming.

I am not denying that the normalization of self-harming behaviour may be dangerous, but at the same time online communities are a more complex reality than depicted by social media and politicians, as they often respond to needs that society is failing to meet. Banning the hashtags and the imagery related to self-harm – as Instagram has recently done – also denies access to the support linked to those resources.

EL: What has been the impact of your project so far?

AL: This project is having some impact on policymakers. This year we have presented findings to the All-Party Parliamentary Group (APPG) Inquiry into mental health and social media, as well as to clinicians, third sector organisations and national suicide prevention leads.

We also recently discussed our research on the academic news site The Conversation.

Tuesday, 23 July 2019

Biased by our Imaginings

Today’s post is written by Ema Sullivan-Bissett, who is a Lecturer in Philosophy at the University of Birmingham. Here she overviews her paper ‘Biased by Our Imaginings’, recently published in Mind & Language.

In my paper I propose and defend a new model of implicit bias according to which implicit biases are constituted by unconscious imaginings. As part of setting out my view I defend the coherence of unconscious imagination and argue that it does not represent a revisionary notion of imagination.

Implicit biases have been identified as ‘the processes or states that have a distorting influence on behaviour and judgement, and are detected in experimental conditions with implicit measures’ (Holroyd 2016: 154). They are posited as items which cause common microbehaviours or microdiscriminations that cannot be tracked, predicted, or explained by explicit attitudes.

The canonical view of implicit biases is that they are associations. The idea is that one’s concept of, say, woman is associated with a negative valence, or another concept (weakness) such that the activation of one part of the association triggers the other. On this view implicit biases are concatenations of mental items, with no syntactic structure.

Recently though, there has been a move away from the associative picture to thinking of implicit biases as having propositional contents and as not being involved in associative processes. This kind of view is motivated by some empirical work (reviewed at length in Mandelbaum 2016). In light of this shift, new models of implicit bias have been proposed to accommodate their propositional nature; these include models according to which implicit biases are unconscious beliefs (Mandelbaum 2016) and patchy endorsements (Levy 2015).

Thursday, 18 July 2019

CauseHealth: An Interview with Rani Lill Anjum

Today I interview Rani Lill Anjum on her exciting project CauseHealth. Rani works as a philosopher of science at the Norwegian University of Life Sciences (NMBU) and is the Director of the Centre for Applied Philosophy of Science (CAPS), also at NMBU.

LB: How did you first become interested in causation in the health sciences?

RLA: I started thinking about causation in medicine back in 2011, when I was working on my research project Causation in Science. Many of my collaborators already had an interest in philosophy of medicine, and I started thinking that if causation was complicated in physics, biology, psychology and social science, then medicine must be the biggest challenge. After all, a person is a unity of them all: a physiological, biological, mental and social being. Also, our health is causally influenced by, or even the result of, what happens to us at all these levels.

LB: What would you describe as the main finding of CauseHealth now that it is drawing to a close, and what do you expect its implications to be?

RLA: In the beginning, I didn't know very much about medicine or philosophy of medicine, so I had some naïve idea about who the target group from the health profession would be. Now I understand why we have met most enthusiasm from the clinicians, since they are the ones working with individual patients. In the last year of CauseHealth, we have therefore worked more toward clinicians, especially those who feel a squeeze between the public health agenda of evidence based medicine and the clinical needs of their individual patients.

In public health, the aim of medical research is to say something general to the population, typically based on statistics from clinical studies. However, in the clinic, one will also meet patients who are not represented in the clinical trials. In CauseHealth, we have emphasised a dispositionalist understanding of causation, as it was developed by myself and Stephen Mumford in Getting Causes from Powers (OUP 2011).

Here we argue that causation is essentially complex, context-sensitive, singular and intrinsic. In medicine, this translates to
  • genuine complexity rather than mono-causal models
  • heterogeneity instead of homogeneity
  • medical uniqueness rather than normal or average measures
  • intrinsic dispositions rather than statistical correlations.

This is very different from what one would get from other theories of causation, especially empiricist theories such as the regularity theory of David Hume or the counterfactual theory of David Lewis. Scientific methodology, however, actually relies heavily on these notions of causation, in the consistent search for regularities of cause and effect under some standard, normal or ideal conditions, using correlation data or difference-makers in the comparison of such data.

By being aware of how scientific methodology and practice are influenced by ontological and epistemological assumptions from philosophy, we can empower clinicians and other health practitioners to engage critically in the development of their own profession. Our experience is that medical professionals appreciate learning more about philosophy of science in this way, which is also why Elena Rocca and I established the Centre for Applied Philosophy of Science at NMBU.

LB: As you know, at Imperfect Cognitions we have a special interest in mental health. What notion of causation do you think captures the complexities of mental health challenges?

RLA: We started from the problem of medically unexplained symptoms, which are notoriously challenging to treat within the biomedical model. They are also not your typical one-cause, one-effect conditions, but have a mix of physical and mental causes and symptoms, often in a unique combination for each patient. After a year or so on the project, someone challenged me on our interest in these conditions, saying that all conditions are a mix of mental and physical causes and symptoms. Most illnesses also come in combination with others, so-called co-morbidity, so a problem is that all of medicine is divided according to Cartesian dualism: physical versus mental health.

A dispositionalist notion of cause gives a much more holistic starting point for understanding health and illness. The clinicians we work with are often phenomenologists and therefore sceptical of causal talk; this is an aversion that we try to cure in CauseHealth. From our perspective, genuine holism cannot be treated as a multifactorial matter. Instead, one must start by talking to the patient and finding out more about them and their story. Most of the causally relevant information will come from their medical history, biography, life situation, diet, genetics and so on. The medical intervention is only one single factor that will interact with this vast complexity.

LB: Your project has been genuinely interdisciplinary. What have been the advantages of interacting and collaborating with people from different backgrounds?

RLA: I have learned that all disciplines and professions use causal vocabulary in different ways. 'Causal mechanism' means something very different in medicine than in molecular biology, for instance. In medicine, one thinks of mechanisms as reductionist and determinist, based on lab research on animal models. This is why 'mechanistic evidence' ranks so low in evidence based medicine. I have now started to talk about causal theories instead of causal mechanisms.

Tuesday, 16 July 2019

Blended Memory

Tim Fawns is a Fellow in Clinical Education and Deputy Programme Director of the MSc Clinical Education at Edinburgh Medical School at the University of Edinburgh. He received his PhD from the University of Edinburgh in 2017, and his primary research interests are memory, digital technology and education. In this post, he discusses themes from his recent paper "Blended memory: A framework for understanding distributed autobiographical remembering with photography" in Memory Studies.

Recording live music on mobile phones, posting photos of breakfast on social media, taking the same photo six times when a friend with a better camera has already taken it... these are some of the many idiosyncratic photography practices I have encountered during my research into memory and photography, alongside traditional examples of family and holiday pictures.

From reading literature from cultural studies, media studies, and human computer interaction, followed by lots of informal conversations and, finally, a series of research interviews, it became clear to me that photography is an eccentric enterprise, and its relationship to how we remember our lives is highly complex. My research paints a very different picture from many cognitive psychology studies, where participants are, for example, shown a photograph (often, one that they have not taken themselves) and asked to recall something specific (e.g. a story or an event or a detail).

Controlled studies are often aimed at understanding the underlying mechanisms of memory or the effects of an intervention (e.g. using a photograph as a cue) on recall or recognition. I came to realise that photographs are not simply cues, and remembering with photography is not just looking at a photograph and then remembering. Practices of photography (taking photos, looking at them, organising them, sharing them with others) and the meanings we associate with our pictures are an integral part of the process of remembering. 

Thursday, 11 July 2019

Responsible Brains

Today's post is by Katrina Sifferd (pictured below). She holds a Ph.D. in philosophy from King’s College London, and is Professor and Chair of Philosophy at Elmhurst College. After leaving King’s, Katrina held a post-doctoral position as Rockefeller Fellow in Law and Public Policy and Visiting Professor at Dartmouth College. Before becoming a philosopher, Katrina earned a Juris Doctorate and worked as a senior research analyst on criminal justice projects for the National Institute of Justice.

Many thanks to Lisa for her kind invitation to introduce our recently published book, Responsible Brains: Neuroscience, Law, and Human Culpability. Bill Hirstein, Tyler Fagan, and I, who are philosophers at Elmhurst College, researched and wrote the book with the support of a Templeton sub-grant from the Philosophy and Science of Self-Control Project managed by Al Mele at Florida State University.

Responsible Brains joins a larger discussion about the ways evidence generated by the brain sciences can inform responsibility judgments. Can data about the brain help us determine who is responsible, and for which actions? Our book answers with resounding “yes” – but of course, the devil is in the details. To convince readers that facts about brains bear on facts about responsibility, we must determine which mental capacities are necessary to responsible agency, and which facts about brains are relevant to those capacities.

In Responsible Brains we argue that folk conceptions of responsibility, which underpin our shared practices of holding others morally and legally responsible, implicitly refer to a suite of cognitive capacities known in the neuropsychological field as executive functions. We contend that executive functions – such as attentional control, planning, inhibition, and task switching – can ground a reasons-responsiveness account of responsibility, including sensitivity to moral or legal reasons and the volitional control to act in accordance with those reasons. A simplified statement of our theory is that persons must have a "minimal working set" (MWS) of executive functions to be responsible for their actions; if they lack a MWS, they are not (fully) responsible.

Some scholars claim that our sort of project goes too far. Stephen Morse, for example, worries that neurolaw researchers get carried away by their enthusiasm for seductive fMRI images and buzzy breakthroughs, leading them to apply empirical findings incautiously and overestimate their true relevance (thereby succumbing to “brain overclaim syndrome”). Other scholars, who think neuroscientific evidence undermines folk concepts crucial to responsibility judgments (like free will), may think we don’t go far enough. We remain confident in our moderate position: Neuroscience is relevant to responsibility judgements; it is largely compatible with our folk psychological concepts; and it can be used to clarify and “clean up” such concepts.

Because the criminal law is a repository of folk psychological judgments and concepts about responsibility, we often test and apply our theory using criminal cases. For instance, we find support for our account in the fact that the mental disorder most likely to ground successful legal insanity pleas is schizophrenia. Most associate this disorder with false beliefs about the world generated by hallucinations and delusions, but—crucially—persons with schizophrenia may also have severely diminished executive functions, resulting in an inability to identify and correct those false beliefs. Such persons are, by our lights, less than fully responsible. 

Tuesday, 9 July 2019

What does debiasing tell us about implicit bias?

Nick Byrd is a PhD candidate and Fellow at Florida State University, working in the Moral & Social Processing (a.k.a., Paul Conway) Lab in the Department of Psychology, and in the Experimental Philosophy Research Group in the Department of Philosophy at Florida State University. In this post, he introduces his paper “What we can (and can’t) infer about implicit bias from debiasing experiments”, recently published in Synthese.

Implicit bias is often described as associative, unconscious, and involuntary. However, philosophers of mind have started challenging these claims. Some of their reasons have to do with debiasing experiments. The idea is that if debiasing is not entirely involuntary and unconscious, then implicit bias is not entirely involuntary and unconscious.

Sure enough, some evidence suggests that debiasing is not entirely involuntary and unconscious (e.g., Devine, Forscher, Austin, & Cox, 2012). So it seems that implicit bias can be conscious and voluntary after all—i.e., it can be reflective.

Now, why would philosophers think that debiasing is not associative? I worry that this non-associationism rests on a couple mistakes.

First, there is a philosophical mistake; it's what I call the any-only mixup (Section 0 of the paper): the mistake of concluding that a phenomenon is not predicated on any instances of a particular process when the evidence merely shows that the phenomenon is not predicated on only instances of that particular process.

The second mistake is more empirical. It is the mistake of overestimating evidence. As you may know, the open science movement has been reshaping psychological science for years. Part of this movement aims to improve the power of its studies to find true positive results by, among other things, increasing the sample size of experiments and taking statistical significance more seriously.

Thursday, 4 July 2019

Regard for Reason in the Moral Mind

This post is by Josh May, Associate Professor of Philosophy at the University of Alabama at Birmingham. He presents his book, Regard for Reason in the Moral Mind (OUP, 2018). May’s research lies primarily at the intersection of ethics and science. He received his PhD in philosophy from the University of California, Santa Barbara in 2011. Before taking a position at UAB, he spent 2 years teaching at Monash University in Melbourne, Australia.

My book is a scientifically informed examination of moral judgment and moral motivation that ultimately argues for what I call optimistic rationalism, which contains empirical and normative theses. The empirical thesis is a form of (psychological) rationalism, which asserts that moral judgment and motivation are fundamentally driven by reasoning or inference. The normative thesis is cautiously optimistic, claiming that moral cognition and motivation are, in light of the science, in pretty good shape: at the very least, the empirical evidence doesn't warrant sweeping debunking of either core aspect of the moral mind.

There are two key maneuvers I make to support these theses. First, we must recognize that reasoning/inference often occurs unconsciously. Many of our moral judgments are automatic and intuitive, but we shouldn't conclude that they are driven merely by gut feelings just because conscious deliberation didn't precede the judgment. Even with the replication crisis, the science clearly converges on the idea that most of our mental lives involve complex computation that isn't always accessible to introspection and that heavily influences behavior. As it goes for judgments of geography, mathematics, and others' mental states, so it goes for moral judgment. Indeed, the heart of the rationalist position is that moral cognition isn't special in requiring emotion (conceived as distinct from reason), compared to beliefs about other topics. In the end, the reason/emotion dichotomy is dubious, but that supports the rationalist position, not sentimentalism.

Second, I argue that what influences our moral minds often looks irrelevant or extraneous at first glance but is less problematic upon further inspection. Sometimes the issue is that irrelevant factors hardly influence our moral thoughts or motivations once one digs into the details of the studies. For example, meta-analyses of framing effects and incidental feelings of disgust suggest they at best exert a small influence on a minority of our moral choices. Of course, some factors do substantially influence us but a proper understanding of them reveals that they’re morally relevant. For example, Greene distrusts our commonsense moral judgments that conflict with utilitarianism because they’re influenced by whether a harm is “prototypically violent.” But it turns out that involves harming actively, using personal contact, and as a means to an end, which together form a morally relevant factor; it’s not merely an aversion to pushing. Similarly, the well-established bystander effect shows that helping behavior is motivated by whether one perceives there to be any help necessary, but that’s a morally relevant consideration (contra Doris). After examining many kinds of influences, I build on some other work with Victor Kumar to develop a kind of dilemma for those who seek to empirically debunk many of our moral thoughts or motivations: the purportedly problematic influences are often either substantial or morally irrelevant but rarely both.

Tuesday, 2 July 2019

Autonomy in Mood Disorders

Today's post is by Elliot Porter. Elliot is a political philosopher. His research examines autonomy and abnormal psychology, focusing particularly on affective disorders. During his MSc he sat as the student Mental Health Officer on Glasgow University's Students’ Representative Council, and the university’s Disability Equality Group. He currently sits as a member of a Research Ethics Committee in Glasgow, which approves medical research for the Health Research Authority.


It is widely thought that serious mental disorder can injure a person's autonomy. Beauchamp and Childress list mental disorder among the controlling influences that render a person non-autonomous. Neither Raz nor Dworkin allows his theory to conclude that people with mental disorder are in fact autonomous.

Happily, recent research tends not to take mental disorder as a homogeneous phenomenon, in favour of examining different disorders and symptoms individually. Lisa Bortolotti has examined the relationship between delusion and autonomy in detail. Lubomira Radoilska has characterised depression as a state which injures autonomy by taking away our agential power. Both have sought to explain how and why these kinds of mental disorder injure our autonomy. I am interested in taking a different approach.

During my MSc I looked at various kinds of mental disorder and examined the commitments that three theories of autonomy would have for each. As one would expect, different theories are committed to different judgements in certain cases, and turn on different features of a disorder. What was striking was the degree of detail required before a theory could safely compel a conclusion. 

Judgements about autonomy in psychiatry are made on a case-by-case basis, where this degree of detail is available. However, just as clinical decisions are better informed by understanding the common side-effects of treatments or diseases, these kinds of moral judgement will be better informed by knowing what sorts of moral implications different disorders have. We must be able to recognise which kinds of depression threaten autonomy and which (if any) do not. Clinicians can make safer judgements, and patients’ rights are more secure, if these individual judgements are informed by a systematic understanding of how different disorders interact with autonomy.