Tuesday, 16 July 2019

Blended Memory

Tim Fawns is a Fellow in Clinical Education and Deputy Programme Director of the MSc Clinical Education at Edinburgh Medical School, University of Edinburgh. He received his PhD from the University of Edinburgh in 2017, and his primary research interests are memory, digital technology and education. In this post, he discusses themes from his recent paper "Blended memory: A framework for understanding distributed autobiographical remembering with photography" in Memory Studies.


Recording live music on mobile phones, posting photos of breakfast on social media, taking the same photo six times when a friend with a better camera has already taken it... these are some of the many idiosyncratic photography practices I have encountered during my research into memory and photography, alongside traditional examples of family and holiday pictures.

From reading the literature in cultural studies, media studies, and human-computer interaction, followed by lots of informal conversations and, finally, a series of research interviews, it became clear to me that photography is an eccentric enterprise, and its relationship to how we remember our lives is highly complex. My research paints a very different picture from many cognitive psychology studies, where participants are, for example, shown a photograph (often one that they have not taken themselves) and asked to recall something specific (e.g. a story or an event or a detail).

Controlled studies are often aimed at understanding the underlying mechanisms of memory or the effects of an intervention (e.g. using a photograph as a cue) on recall or recognition. I came to realise that photographs are not simply cues, and remembering with photography is not just looking at a photograph and then remembering. Practices of photography (taking photos, looking at them, organising them, sharing them with others) and the meanings we associate with our pictures are an integral part of the process of remembering. 

Thursday, 11 July 2019

Responsible Brains

Today's post is by Katrina Sifferd (pictured below). She holds a Ph.D. in philosophy from King’s College London, and is Professor and Chair of Philosophy at Elmhurst College. After leaving King’s, Katrina held a post-doctoral position as Rockefeller Fellow in Law and Public Policy and Visiting Professor at Dartmouth College. Before becoming a philosopher, Katrina earned a Juris Doctorate and worked as a senior research analyst on criminal justice projects for the National Institute of Justice.



Many thanks to Lisa for her kind invitation to introduce our recently published book, Responsible Brains: Neuroscience, Law, and Human Culpability. Bill Hirstein, Tyler Fagan, and I, all philosophers at Elmhurst College, researched and wrote the book with the support of a Templeton sub-grant from the Philosophy and Science of Self-Control Project managed by Al Mele at Florida State University.

Responsible Brains joins a larger discussion about the ways evidence generated by the brain sciences can inform responsibility judgments. Can data about the brain help us determine who is responsible, and for which actions? Our book answers with a resounding “yes” – but of course, the devil is in the details. To convince readers that facts about brains bear on facts about responsibility, we must determine which mental capacities are necessary for responsible agency, and which facts about brains are relevant to those capacities.

In Responsible Brains we argue that folk conceptions of responsibility, which underpin our shared practices of holding others morally and legally responsible, implicitly refer to a suite of cognitive capacities known to the neuropsychological field as executive functions. We contend that executive functions – such as attentional control, planning, inhibition, and task switching – can ground a reasons-responsiveness account of responsibility, including sensitivity to moral or legal reasons and the volitional control to act in accordance with those reasons. A simplified statement of our theory is that persons must have a “minimal working set” (MWS) of executive functions to be responsible for their actions; if they lack a MWS, they are not (fully) responsible.

Some scholars claim that our sort of project goes too far. Stephen Morse, for example, worries that neurolaw researchers get carried away by their enthusiasm for seductive fMRI images and buzzy breakthroughs, leading them to apply empirical findings incautiously and overestimate their true relevance (thereby succumbing to “brain overclaim syndrome”). Other scholars, who think neuroscientific evidence undermines folk concepts crucial to responsibility judgments (like free will), may think we don’t go far enough. We remain confident in our moderate position: neuroscience is relevant to responsibility judgments; it is largely compatible with our folk psychological concepts; and it can be used to clarify and “clean up” such concepts.




Because the criminal law is a repository of folk psychological judgments and concepts about responsibility, we often test and apply our theory using criminal cases. For instance, we find support for our account in the fact that the mental disorder most likely to ground successful legal insanity pleas is schizophrenia. Most people associate this disorder with false beliefs about the world generated by hallucinations and delusions, but—crucially—persons with schizophrenia may also have severely diminished executive functions, resulting in an inability to identify and correct those false beliefs. Such persons are, by our lights, less than fully responsible.

Tuesday, 9 July 2019

What does debiasing tell us about implicit bias?

Nick Byrd is a PhD candidate and Fellow at Florida State University, working in the Moral & Social Processing (a.k.a., Paul Conway) Lab in the Department of Psychology and in the Experimental Philosophy Research Group in the Department of Philosophy. In this post, he introduces his paper “What we can (and can’t) infer about implicit bias from debiasing experiments”, recently published in Synthese.


Implicit bias is often described as associative, unconscious, and involuntary. However, philosophers of mind have started challenging these claims. Some of their reasons have to do with debiasing experiments. The idea is that if debiasing is not entirely involuntary and unconscious, then implicit bias is not entirely involuntary and unconscious.

Sure enough, some evidence suggests that debiasing is not entirely involuntary and unconscious (e.g., Devine, Forscher, Austin, & Cox, 2012). So it seems that implicit bias can be conscious and voluntary after all—i.e., it can be reflective.

Now, why would philosophers think that debiasing is not associative? I worry that this non-associationism rests on a couple of mistakes.

First, there is a philosophical mistake; it’s what I call the any-only mixup (Section 0 of the paper): the mistake of concluding that a phenomenon is not predicated on any instances of a particular process when the evidence merely shows that the phenomenon is not predicated on only instances of that particular process.
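Put a bit more formally (this gloss and its notation are mine, not the paper’s): letting A(x) mean that a given component process x of debiasing is associative, the fallacious move runs

¬∀x A(x)  ⇏  ¬∃x A(x)

That is, evidence that not every process underlying debiasing is associative does not show that no such process is associative.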

The second mistake is more empirical. It is the mistake of overestimating evidence. As you may know, the open science movement has been reshaping psychological science for years. Part of this movement aims to improve the statistical power of studies to detect true effects by, among other things, increasing the sample sizes of experiments and taking statistical significance more seriously.
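To make the power point concrete, here is a minimal illustrative sketch (my example, not from the paper) using Python and the statsmodels library; the effect sizes, alpha level, and target power below are conventional assumed values, not figures from any particular debiasing study.

```python
# A rough illustration of why sample size matters: the number of participants
# needed per group to detect an effect grows sharply as the true effect shrinks.
from statsmodels.stats.power import TTestIndPower  # power analysis for two-sample t-tests

analysis = TTestIndPower()

for effect_size in (0.8, 0.5, 0.2):  # conventional "large", "medium", "small" effects (Cohen's d)
    n_per_group = analysis.solve_power(
        effect_size=effect_size,
        alpha=0.05,   # assumed significance threshold
        power=0.80,   # assumed target probability of detecting a true effect
    )
    print(f"d = {effect_size}: about {round(n_per_group)} participants per group")
```

Small studies of small effects are therefore likely to miss real effects, and to overestimate the ones they do detect, which is one reason to be cautious about what underpowered debiasing experiments can establish.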


Thursday, 4 July 2019

Regard for Reason in the Moral Mind

This post is by Josh May, Associate Professor of Philosophy at the University of Alabama at Birmingham. He presents his book, Regard for Reason in the Moral Mind (OUP, 2018). May’s research lies primarily at the intersection of ethics and science. He received his PhD in philosophy from the University of California, Santa Barbara in 2011. Before taking a position at UAB, he spent 2 years teaching at Monash University in Melbourne, Australia.




My book is a scientifically informed examination of moral judgment and moral motivation that ultimately argues for what I call optimistic rationalism, which contains both an empirical and a normative thesis. The empirical thesis is a form of (psychological) rationalism, which asserts that moral judgment and motivation are fundamentally driven by reasoning or inference. The normative thesis is cautiously optimistic, claiming that moral cognition and motivation are, in light of the science, in pretty good shape; at least, the empirical evidence doesn’t warrant sweeping debunking of either core aspect of the moral mind.

There are two key maneuvers I make to support these theses. First, we must recognize that reasoning/inference often occurs unconsciously. Many of our moral judgments are automatic and intuitive, but we shouldn’t conclude that they are driven merely by gut feelings just because conscious deliberation didn’t precede the judgment. Even with the replication crisis, the science clearly converges on the idea that most of our mental lives involve complex computation that isn’t always accessible to introspection and that heavily influences behavior. As it goes for judgments of geography, mathematics, and others’ mental states, so it goes for moral judgment. Indeed, the heart of the rationalist position is that moral cognition isn’t special in requiring emotion (conceived as distinct from reason), compared to beliefs about other topics. In the end, the reason/emotion dichotomy is dubious, but that supports the rationalist position, not sentimentalism.

Second, I argue that what influences our moral minds often looks irrelevant or extraneous at first glance but is less problematic upon further inspection. Sometimes the issue is that irrelevant factors hardly influence our moral thoughts or motivations once one digs into the details of the studies. For example, meta-analyses of framing effects and incidental feelings of disgust suggest they at best exert a small influence on a minority of our moral choices. Of course, some factors do substantially influence us, but a proper understanding of them reveals that they’re morally relevant. For example, Greene distrusts our commonsense moral judgments that conflict with utilitarianism because they’re influenced by whether a harm is “prototypically violent.” But it turns out that this involves harming actively, using personal contact, and as a means to an end, which together form a morally relevant factor; it’s not merely an aversion to pushing. Similarly, the well-established bystander effect shows that helping behavior is motivated by whether one perceives any help to be necessary, but that’s a morally relevant consideration (contra Doris). After examining many kinds of influences, I build on other work with Victor Kumar to develop a kind of dilemma for those who seek to empirically debunk many of our moral thoughts or motivations: the purportedly problematic influences are often either substantial or morally irrelevant, but rarely both.

Tuesday, 2 July 2019

Autonomy in Mood Disorders

Today's post is by Elliot Porter. Elliot is a political philosopher. His research examines autonomy and abnormal psychology, focusing particularly on affective disorders. During his MSc he sat as the student Mental Health Officer on Glasgow University's Students’ Representative Council, and the university’s Disability Equality Group. He currently sits as a member of a Research Ethics Committee in Glasgow, which approves medical research for the Health Research Authority.

  
 

It is widely thought that serious mental disorder can injure a person's autonomy. Beauchamp and Childress list mental disorder among the controlling influences that render a person non-autonomous. Neither Raz nor Dworkin allows his theory to conclude that people with mental disorder are in fact autonomous.

Happily, recent research tends not to treat mental disorder as a homogeneous phenomenon, instead examining different disorders and symptoms individually. Lisa Bortolotti has examined the relationship between delusion and autonomy in detail. Lubomira Radoilska has characterised depression as a state which injures autonomy by taking away our agential power. Both have sought to explain how and why these kinds of mental disorder injure our autonomy. I am interested in taking a different approach.

During my MSc I looked at various kinds of mental disorder and examined the commitments that three theories of autonomy would have for each. As one would expect, different theories are committed to different judgements in certain cases, and turn on different features of a disorder. What was striking was the degree of detail required before a theory could safely compel a conclusion. 

Judgements about autonomy in psychiatry are made on a case-by-case basis, where this degree of detail is available. However, just as clinical decisions are better informed by understanding the common side-effects of treatments or diseases, these kinds of moral judgement will be better informed by knowing what sorts of moral implications different disorders have. We must be able to recognise which kinds of depression threaten autonomy and which (if any) do not. Clinicians can make safer judgements, and patients’ rights are more secure, if these individual judgements are informed by a systematic understanding of how different disorders interact with autonomy.