Thursday, 18 July 2019

CauseHealth: An Interview with Rani Lill Anjum

Today I interview Rani Lill Anjum about her exciting project CauseHealth. Rani works as a philosopher of science at the Norwegian University of Life Sciences (NMBU) and is the Director of the Centre for Applied Philosophy of Science (CAPS), also at NMBU.

LB: How did you first become interested in causation in the health sciences?

RLA: I started thinking about causation in medicine back in 2011, when I was working on my research project Causation in Science. Many of my collaborators already had an interest in philosophy of medicine, and I started thinking that if causation was complicated in physics, biology, psychology and social science, then medicine must be the biggest challenge. After all, a person is the unity of them all: a physiological, biological, mental and social being. Our health is also causally influenced by, or even the result of, what happens to us at all these levels.

LB: What would you describe as the main finding of CauseHealth now that it is drawing to a close, and what do you expect its implications to be?

RLA: In the beginning, I didn't know very much about medicine or philosophy of medicine, so I had a somewhat naïve idea about who the target group from the health professions would be. Now I understand why we have met with the most enthusiasm from clinicians, since they are the ones working with individual patients. In the last year of CauseHealth, we have therefore worked more toward clinicians, especially those who feel squeezed between the public health agenda of evidence-based medicine and the clinical needs of their individual patients.

In public health, the aim of medical research is to say something general about the population, typically based on statistics from clinical studies. In the clinic, however, one will also meet patients who are not represented in the clinical trials. In CauseHealth, we have emphasised a dispositionalist understanding of causation, as developed by myself and Stephen Mumford in Getting Causes from Powers (OUP 2011).

Here we argue that causation is essentially complex, context-sensitive, singular and intrinsic. In medicine, this translates to
  • genuine complexity rather than mono-causal models
  • heterogeneity instead of homogeneity
  • medical uniqueness rather than normal or average measures
  • intrinsic dispositions rather than statistical correlations.

This is very different from what one would get from other theories of causation, especially empiricist theories such as the regularity theory of David Hume or the counterfactual theory of David Lewis. Scientific methodology, however, relies heavily on these notions of causation, in the consistent search for regularities of cause and effect under some standard, normal or ideal conditions, using correlation data or difference-makers in the comparison of such data.

By being aware of how scientific methodology and practice are influenced by ontological and epistemological assumptions from philosophy, we can empower clinicians and other health practitioners to engage critically in the development of their own profession. Our experience is that medical professionals appreciate learning more about philosophy of science in this way, which is also why Elena Rocca and I established the Centre for Applied Philosophy of Science at NMBU.

LB: As you know, at Imperfect Cognitions we have a special interest in mental health. What notion of causation do you think captures the complexities of mental health challenges?

RLA: We started from the problem of medically unexplained symptoms, which are notoriously challenging to treat within the biomedical model. They are not your typical one-cause, one-effect conditions, but involve a mix of physical and mental causes and symptoms, often in a unique combination for each patient. After a year or so on the project, someone challenged me on our interest in these conditions, saying that all conditions are a mix of mental and physical causes and symptoms. Most illnesses also occur in combination with others, so-called co-morbidity. The problem is that all of medicine is nonetheless divided along the lines of Cartesian dualism: physical versus mental health.

A dispositionalist notion of cause gives a much more holistic starting point for understanding health and illness. The clinicians we work with are often phenomenologists, and therefore sceptical of causal talk; this is an aversion that we try to cure in CauseHealth. From our perspective, genuine holism cannot be treated as a multifactorial matter. Instead, one must start by talking to the patient and finding out more about them and their story. Most of the causally relevant information will come from their medical history, biography, life situation, diet, genetics and so on. The medical intervention is only a single factor that will interact with this vast complexity.

LB: Your project has been genuinely interdisciplinary. What have been the advantages of interacting and collaborating with people from different backgrounds?

RLA: I have learned that all disciplines and professions use causal vocabulary in different ways. 'Causal mechanism' means something very different in medicine than in molecular biology, for instance. In medicine, one thinks of mechanisms as reductionist and determinist, based on lab research on animal models. This is why 'mechanistic evidence' ranks so low in evidence based medicine. I have now started to talk about causal theories instead of causal mechanisms.

Tuesday, 16 July 2019

Blended Memory

Tim Fawns is a Fellow in Clinical Education and Deputy Programme Director of the MSc Clinical Education at Edinburgh Medical School at the University of Edinburgh. He received his PhD from the University of Edinburgh in 2017, and his primary research interests are memory, digital technology and education. In this post, he discusses themes from his recent paper "Blended memory: A framework for understanding distributed autobiographical remembering with photography" in Memory Studies.

Recording live music on mobile phones, posting photos of breakfast on social media, taking the same photo six times when a friend with a better camera has already taken it... these are some of the many idiosyncratic photography practices I have encountered during my research into memory and photography, alongside traditional examples of family and holiday pictures.

From reading literature in cultural studies, media studies, and human-computer interaction, followed by lots of informal conversations and, finally, a series of research interviews, it became clear to me that photography is an eccentric enterprise, and its relationship to how we remember our lives is highly complex. My research paints a very different picture from many cognitive psychology studies, where participants are, for example, shown a photograph (often one that they have not taken themselves) and asked to recall something specific (e.g. a story, an event, or a detail).

Controlled studies are often aimed at understanding the underlying mechanisms of memory or the effects of an intervention (e.g. using a photograph as a cue) on recall or recognition. I came to realise that photographs are not simply cues, and remembering with photography is not just looking at a photograph and then remembering. Practices of photography (taking photos, looking at them, organising them, sharing them with others) and the meanings we associate with our pictures are an integral part of the process of remembering. 

Thursday, 11 July 2019

Responsible Brains

Today's post is by Katrina Sifferd (pictured below). She holds a Ph.D. in philosophy from King’s College London, and is Professor and Chair of Philosophy at Elmhurst College. After leaving King’s, Katrina held a post-doctoral position as Rockefeller Fellow in Law and Public Policy and Visiting Professor at Dartmouth College. Before becoming a philosopher, Katrina earned a Juris Doctorate and worked as a senior research analyst on criminal justice projects for the National Institute of Justice.

Many thanks to Lisa for her kind invitation to introduce our recently published book, Responsible Brains: Neuroscience, Law, and Human Culpability. Bill Hirstein, Tyler Fagan, and I, who are philosophers at Elmhurst College, researched and wrote the book with the support of a Templeton sub-grant from the Philosophy and Science of Self-Control Project managed by Al Mele at Florida State University.

Responsible Brains joins a larger discussion about the ways evidence generated by the brain sciences can inform responsibility judgments. Can data about the brain help us determine who is responsible, and for which actions? Our book answers with resounding “yes” – but of course, the devil is in the details. To convince readers that facts about brains bear on facts about responsibility, we must determine which mental capacities are necessary to responsible agency, and which facts about brains are relevant to those capacities.

In Responsible Brains we argue that folk conceptions of responsibility, which underpin our shared practices of holding others morally and legally responsible, implicitly refer to a suite of cognitive capacities known in neuropsychology as executive functions. We contend that executive functions, such as attentional control, planning, inhibition, and task switching, can ground a reasons-responsiveness account of responsibility, including sensitivity to moral or legal reasons and the volitional control to act in accordance with those reasons. A simplified statement of our theory is that persons must have a "minimal working set" (MWS) of executive functions to be responsible for their actions; if they lack an MWS, they are not (fully) responsible.

Some scholars claim that our sort of project goes too far. Stephen Morse, for example, worries that neurolaw researchers get carried away by their enthusiasm for seductive fMRI images and buzzy breakthroughs, leading them to apply empirical findings incautiously and overestimate their true relevance (thereby succumbing to "brain overclaim syndrome"). Other scholars, who think neuroscientific evidence undermines folk concepts crucial to responsibility judgments (like free will), may think we don't go far enough. We remain confident in our moderate position: Neuroscience is relevant to responsibility judgments; it is largely compatible with our folk psychological concepts; and it can be used to clarify and "clean up" such concepts.

Because the criminal law is a repository of folk psychological judgments and concepts about responsibility, we often test and apply our theory using criminal cases. For instance, we find support for our account in the fact that the mental disorder most likely to ground successful legal insanity pleas is schizophrenia. Most people associate this disorder with false beliefs about the world generated by hallucinations and delusions, but, crucially, persons with schizophrenia may also have severely diminished executive functions, resulting in an inability to identify and correct those false beliefs. Such persons are, by our lights, less than fully responsible.

Tuesday, 9 July 2019

What does debiasing tell us about implicit bias?

Nick Byrd is a PhD candidate and Fellow at Florida State University, working in the Moral & Social Processing (a.k.a., Paul Conway) Lab in the Department of Psychology, and in the Experimental Philosophy Research Group in the Department of Philosophy at Florida State University. In this post, he introduces his paper “What we can (and can’t) infer about implicit bias from debiasing experiments”, recently published in Synthese.

Implicit bias is often described as associative, unconscious, and involuntary. However, philosophers of mind have started challenging these claims. Some of their reasons have to do with debiasing experiments. The idea is that if debiasing is not entirely involuntary and unconscious, then implicit bias is not entirely involuntary and unconscious.

Sure enough, some evidence suggests that debiasing is not entirely involuntary and unconscious (e.g., Devine, Forscher, Austin, & Cox, 2012). So it seems that implicit bias can be conscious and voluntary after all—i.e., it can be reflective.

Now, why would philosophers think that debiasing is not associative? I worry that this non-associationism rests on a couple of mistakes.

First, there is a philosophical mistake: what I call the any-only mixup (Section 0 of the paper). This is the mistake of concluding that a phenomenon is not predicated on any instances of a particular process when the evidence merely shows that the phenomenon is not predicated on only instances of that particular process.

The second mistake is more empirical: overestimating evidence. As you may know, the open science movement has been reshaping psychological science for years. Part of this movement aims to improve the power of studies to detect true positive results by, among other things, increasing the sample size of experiments and taking statistical significance more seriously.

Thursday, 4 July 2019

Regard for Reason in the Moral Mind

This post is by Josh May, Associate Professor of Philosophy at the University of Alabama at Birmingham. He presents his book, Regard for Reason in the Moral Mind (OUP, 2018). May’s research lies primarily at the intersection of ethics and science. He received his PhD in philosophy from the University of California, Santa Barbara in 2011. Before taking a position at UAB, he spent 2 years teaching at Monash University in Melbourne, Australia.

My book is a scientifically informed examination of moral judgment and moral motivation that ultimately argues for what I call optimistic rationalism, which contains empirical and normative theses. The empirical thesis is a form of (psychological) rationalism, which asserts that moral judgment and motivation are fundamentally driven by reasoning or inference. The normative thesis is cautiously optimistic, claiming that moral cognition and motivation are, in light of the science, in pretty good shape: at least, the empirical evidence doesn't warrant sweeping debunking of either core aspect of the moral mind.

There are two key maneuvers I make to support these theses. First, we must recognize that reasoning/inference often occurs unconsciously. Many of our moral judgments are automatic and intuitive, but we shouldn't conclude that they are driven merely by gut feelings just because conscious deliberation didn't precede the judgment. Even with the replication crisis, the science clearly converges on the idea that most of our mental lives involve complex computation that isn't always accessible to introspection and that heavily influences behavior. As it goes for judgments of geography, mathematics, and others' mental states, so it goes for moral judgment. Indeed, the heart of the rationalist position is that moral cognition isn't special in requiring emotion (conceived as distinct from reason), compared to beliefs about other topics. In the end, the reason/emotion dichotomy is dubious, but that supports the rationalist position, not sentimentalism.

Second, I argue that what influences our moral minds often looks irrelevant or extraneous at first glance but is less problematic upon further inspection. Sometimes the issue is that irrelevant factors hardly influence our moral thoughts or motivations once one digs into the details of the studies. For example, meta-analyses of framing effects and incidental feelings of disgust suggest they at best exert a small influence on a minority of our moral choices. Of course, some factors do substantially influence us, but a proper understanding of them reveals that they're morally relevant. For example, Greene distrusts our commonsense moral judgments that conflict with utilitarianism because they're influenced by whether a harm is "prototypically violent." But it turns out that this involves harming actively, using personal contact, and as a means to an end, which together form a morally relevant factor; it's not merely an aversion to pushing. Similarly, the well-established bystander effect shows that helping behavior is motivated by whether one perceives any help to be necessary, but that's a morally relevant consideration (contra Doris). After examining many kinds of influences, I build on other work with Victor Kumar to develop a kind of dilemma for those who seek to empirically debunk many of our moral thoughts or motivations: the purportedly problematic influences are often either substantial or morally irrelevant, but rarely both.