
Responsible Brains

Today's post is by Katrina Sifferd. She holds a Ph.D. in philosophy from King’s College London, and is Professor and Chair of Philosophy at Elmhurst College. After leaving King’s, Katrina held a post-doctoral position as Rockefeller Fellow in Law and Public Policy and Visiting Professor at Dartmouth College. Before becoming a philosopher, Katrina earned a Juris Doctor and worked as a senior research analyst on criminal justice projects for the National Institute of Justice.



Many thanks to Lisa for her kind invitation to introduce our recently published book, Responsible Brains: Neuroscience, Law, and Human Culpability. Bill Hirstein, Tyler Fagan, and I, who are philosophers at Elmhurst College, researched and wrote the book with the support of a Templeton sub-grant from the Philosophy and Science of Self-Control Project managed by Al Mele at Florida State University.

Responsible Brains joins a larger discussion about the ways evidence generated by the brain sciences can inform responsibility judgments. Can data about the brain help us determine who is responsible, and for which actions? Our book answers with a resounding “yes” – but of course, the devil is in the details. To convince readers that facts about brains bear on facts about responsibility, we must determine which mental capacities are necessary for responsible agency, and which facts about brains are relevant to those capacities.

In Responsible Brains we argue that folk conceptions of responsibility, which underpin our shared practices of holding others morally and legally responsible, implicitly refer to a suite of cognitive capacities known in neuropsychology as executive functions. We contend that executive functions – such as attentional control, planning, inhibition, and task switching – can ground a reasons-responsiveness account of responsibility, including sensitivity to moral or legal reasons and the volitional control to act in accordance with those reasons. A simplified statement of our theory is that persons must have a “minimal working set” (MWS) of executive functions to be responsible for their actions; if they lack an MWS, they are not (fully) responsible.

Some scholars claim that our sort of project goes too far. Stephen Morse, for example, worries that neurolaw researchers get carried away by their enthusiasm for seductive fMRI images and buzzy breakthroughs, leading them to apply empirical findings incautiously and overestimate their true relevance (thereby succumbing to “brain overclaim syndrome”). Other scholars, who think neuroscientific evidence undermines folk concepts crucial to responsibility judgments (like free will), may think we don’t go far enough. We remain confident in our moderate position: neuroscience is relevant to responsibility judgments; it is largely compatible with our folk psychological concepts; and it can be used to clarify and “clean up” such concepts.




Because the criminal law is a repository of folk psychological judgments and concepts about responsibility, we often test and apply our theory using criminal cases. For instance, we find support for our account in the fact that the mental disorder most likely to ground successful legal insanity pleas is schizophrenia. Most people associate this disorder with false beliefs about the world generated by hallucinations and delusions, but—crucially—persons with schizophrenia may also have severely diminished executive functions, resulting in an inability to identify and correct those false beliefs. Such persons are, by our lights, less than fully responsible.

We also argue that, because executive functions are still developing into a person’s mid-20s, many jurisdictions (especially in the US) are wrong to treat persons who commit serious crimes at 16 and 17 as fully responsible. We claim that persons should not be presumed fully responsible until they are 21.

Finally, let me touch upon what I think are two of the most interesting features of our theory. First, we have a close competitor, the theory of responsibility articulated by Neil Levy. Neil claims that consciousness is the cognitive feature that matters to reasons-responsiveness and thus to responsibility. We argue that he is almost right—but that it only looks like consciousness is what matters because executive functions are usually engaged during periods of conscious thought. As we show in the book, in rare cases the two come apart: a person without consciousness but with executive functioning is responsible, and a person with consciousness but without executive functioning is not.

Second, we claim that punishing criminals with long sentences of incarceration may actually increase crime, because such sentences can degrade the very cognitive functions needed for law-abiding behavior. In the US, prison is a violent and impoverished environment that often denies prisoners any meaningful choice-making. As a result, long stays in prison can diminish executive functioning, which is problematic: we ought not to punish persons in a way that hinders them as responsible agents once they are released.
