Thursday 31 December 2015

Transparency in Belief and Self-Knowledge

In this post I report on the Teorema-sponsored workshop on Transparency in Belief and Self-Knowledge, held at the University of Oviedo (pictured below) on 9th and 10th November 2015, organized by Luis M. Valdés-Villanueva. Below I summarise the talks given by Sarah Sawyer, Miriam McCormick, José Zalabardo, and Jordi Fernández.

In her talk ‘Contrastivism and Anti-Individualism’ Sawyer argued that contrastive self-knowledge entails externalism about mental content. According to contrastivism about knowledge, saying that a subject S knows a proposition p is elliptical for saying that S knows that p rather than that q. Understanding this requires positing a positive contrast class (the set of propositions in contrast to which S knows that p), and a negative contrast class (the set of propositions in contrast to which S does not know that p). Sawyer argued that internalism about mental content makes a negative contrast class impossible in the self-knowledge case, and so entails that self-knowledge is non-contrastive. This means that if self-knowledge is contrastive, that fact would entail the truth of externalism about mental content.

In her paper ‘The Contingency of Transparency’, McCormick argued that Transparency is not a conceptual truth (as proponents have held), and further, nor is it even a psychological fact in all cases of deliberative belief formation. McCormick adopted Nishi Shah’s characterization of Transparency as ‘when asking oneself whether to believe that p’ one must ‘immediately recognize that this question is settled by, and only by, answering the question whether p is true’ (Shah 2003: 447). According to Transparency, one cannot take non-alethic considerations as reasons for belief when deliberating over whether to believe some proposition. McCormick considered three cases which she presented as counterexamples to the Transparency thesis. She then considered three ways the Transparency theorist might understand these cases, and discussed how we might adjudicate between these contrary interpretations. She concluded with some implications and challenges for her claim that Transparency does not always characterize our deliberation over what to believe.

In his paper, ‘Pragmatism and Truth’, Zalabardo sought to flesh out the pragmatist position and differentiate his version of pragmatism from similar, competing views. Zalabardo's pragmatism makes central use of speakers’ attitudes of approval and disapproval directed towards the mental states of the speaker and others. Unlike Robert Brandom's pragmatism, Zalabardo’s view does not have the implication that one does not count as believing a content unless one is able and prepared to defend what one believes with reasons. This had direct implications for the previous discussions of Transparency. In contrast to Huw Price's pragmatism, for example, Zalabardo does not explain the difference between idealism and pragmatism by invoking multiple conceptions of representation. Instead we avoid pragmatism collapsing into idealism by denying the cogency of taking up an ‘external’ perspective on our cognitive practices in general, a kind of manoeuvre Zalabardo credited to Quine.

Tuesday 29 December 2015

A Functionalist Approach to the Concept of 'Delusion'

This post is by Gottfried Vosgerau, Professor of Philosophy at the University of Düsseldorf. Gottfried's research interests are in the philosophy and metaphysics of mind, neurophilosophy, and cognitive science. Here he summarises his recent paper, co-authored with Patrice Soom, 'A Functionalist Approach to the Concept of "Delusion"', published in the Journal for Philosophy and Psychiatry.

Based on the widely accepted DSM definition of delusions, delusions are commonly held to be false beliefs about reality that are not shared by the community the subject lives in and that are sustained despite overwhelming counter-evidence. In our paper, we argue that this conceptualization cannot be used for a scientific investigation of delusions. For this purpose, we argue, delusions should be defined as mental states with asymmetric inferential profiles: While they have inferential impact on other mental states, they are not affected by other mental states (especially not affected in a way that would lead to revision). This definition can be nicely captured in functionalistic terms and summarized with the slogan that delusions are mental states that are immune to revision.

Here, we do not wish to repeat the arguments against the DSM definition, some of which are well known; they are based on examples showing that the definition is either too narrow or too wide. Nor do we want to go into the technical details of the functional definition. Instead, we would like to briefly discuss two broader points:

i) What is a 'scientific' definition of delusions and why do we need one?
ii) Why is a functional definition best suited?

The DSM is a manual used by both researchers and clinicians. For this reason, there is a multitude of constraints applying to the classification and the definition of mental disorders. 'Scientificity' or 'scientific adequacy' is only one of the possible criteria to be taken into account in order to define delusions. Other constraints include ease of applicability and reliability of daily diagnoses, political goals (e.g. ensuring that therapy costs are covered by insurance), social considerations (e.g. destigmatization), and therapeutic implications. All of these goals are equally important. However, finding a middle way between all these heterogeneous constraints comes with the risk of conflating different dimensions in the debate. For example, while it is socially and politically most significant to make a distinction between healthy and pathological, this dimension is of little relevance for a scientific understanding of the mechanisms leading to this or that behaviour.

Monday 28 December 2015

Meaning and Mental Illness

For our series of first-person accounts, Kitt O'Malley, blogger and mental health advocate, writes about her experience of altered states and what these mean to her.

When I was twenty-one, upon returning from my grandfather’s memorial mass at which I gave the eulogy, I first experienced a series of altered mental states which I chose to interpret as God calling me to the ordained ministry. I questioned that sense of call due to my intellectual skepticism, my agnosticism, and the fact that I had a history of mental illness, namely major depression and dysthymia. God did not speak to me in my altered mental states. I heard no voices and saw no visions. The altered states I entered were sometimes ecstatic and sometimes tempting and dark. My interpretation of my experiences was influenced by my familiarity with the works of Alan Watts and D.T. Suzuki on Zen Buddhism, C.S. Lewis’ The Screwtape Letters, and Roman Catholic mystic saints.

As I received no definitive instructions, I didn’t know exactly what God called me to do, but I chose to identify with mystic saints and believed that God called me to seminary training. I did not pursue a seminary education at that time. Later when I was thirty, after being prescribed antidepressants, I experienced a week-long psychotic state in which simultaneous thoughts raced through my mind in binary (zeroes and ones), about chaos theory, and about Roman Catholic mystic saints. Even after the psychotic break, my diagnosis remained dysthymic, with the episode believed to be a reaction to antidepressant medication.

Thursday 24 December 2015

Mind, Body and Soul: Mental Health Nearing the End of Life

On 10th November 2015 the Royal Society of Medicine hosted a very interesting conference, entitled "Mind, Body and Soul: An update on psychiatric, philosophical and legal aspects of care nearing the end of life". Here is a report of the sessions I attended on the day.

In Session 1, Matthew Hotopf (King's College London) talked about his experience of treating people with depression in palliative care. Antidepressants are effective relative to placebo. People with strong suicidal ideation are in a difficult situation, as they cannot easily be moved to psychiatric wards due to the special care they need. The important factor is being able to contain the risk of death by suicide and self-harm. Hotopf concluded by saying that it is normal to have extreme emotions near the end of life, and this does not mean that one suffers from a mental disorder.

Annabel Price, Consultant Psychiatrist at the Cambridgeshire and Peterborough NHS Foundation Trust, pictured above, focused on issues surrounding the desire for death: How can it be measured? Does it change over time? Does treatment for depression affect the desire for death? How should clinicians respond to the desire for death? Suicide is widespread, and more common among the elderly, men, people with psychiatric disorders, and the unemployed. People with terminal illness are among the most vulnerable groups.

Evidence suggests that, although suicidal thoughts are very common in cancer patients, the desire for death is often strongest shortly after diagnosis and then fades. A very small number of people with suicidal thoughts complete suicide (mostly they are elderly, male, socially isolated, and affected by substance abuse). There seems to be a strong link between desire for death and depression. A very interesting result is that most people who express a desire for death would not seek to end their lives via assisted suicide. Another interesting result of qualitative research is that expressing a desire for death can sometimes be a call for help, wanting carers to take one seriously and pay attention, and also a desire to regain control over one's own life, preserving self-determination.

Tuesday 22 December 2015

The Ethics of Delusion

This post is by Lisa Bortolotti. Here she reports on two recently published papers, co-written with Kengo Miyazono.

Kengo and I have recently been interested in how the considerations raised in the philosophy of belief apply to delusions. In our review paper in Philosophy Compass (open access) we argue that the delusions literature has helped us focus on some key issues concerning the nature and development of beliefs. What conditions does a report need to satisfy in order to qualify as the report of a belief? What is the interaction between experience and inference in the process by which beliefs are formed?

Kengo and I also have a joint research paper that recently appeared in Erkenntnis (open access), where we ask what the ethics of belief can tell us about delusions. In this post I shall sum up our arguments in the paper, hoping for some feedback from our blog readers. There are several ways we can think of an ethics for belief. For instance, we could think that the fundamental epistemic norm is not to believe something for which there is no sufficient evidence. In that context, we could ask whether an agent is responsible for forming such a belief, and whether she should be blamed for it. Or we could think that the fundamental norm is to maximise epistemic value when adopting new beliefs, where epistemic value could be measured in terms of the ratio of true to false beliefs, epistemic utility, or an agent's epistemic virtue. Then, we would focus on the consequences of an agent adopting certain beliefs or following certain rules for the adoption of beliefs.

Our suggestion in the paper is that, no matter which approach we choose, it is not obvious that delusions as beliefs are ethically problematic. First, agents do not seem blameworthy for their delusional beliefs because, in the context in which delusions are formed, their ability to believe otherwise is significantly compromised due to reasoning impairments, biases, and motivational factors. From a deontological point of view, impairments, biases, and motivational factors prevent agents from adopting an alternative belief to the delusional one, and from recognising the epistemic shortcomings of their delusions.

Thursday 17 December 2015

Disturbed Consciousness

In this post, Rocco J. Gennaro (pictured below) presents his forthcoming edited book, 'Disturbed Consciousness: New Essays on Psychopathologies and Theories of Consciousness'.

My name is Rocco J. Gennaro. I am Professor of Philosophy and Philosophy Department Chair at the University of Southern Indiana in Evansville, Indiana, USA. I received my Ph.D. in philosophy from Syracuse University in 1991. I moved from Indiana State University in Terre Haute (where I was for fourteen years) to the University of Southern Indiana in 2009.

My main area of specialty is philosophy of mind/cognitive science and consciousness, but I also have strong interests in metaphysics, ethics, and early modern history of philosophy. I have published seven books (as either sole author or editor) and numerous papers in these areas, often defending a version of the higher-order thought (HOT) theory of consciousness. I have also written on animal and infant consciousness, episodic memory, and have defended conceptualism.

Tuesday 15 December 2015

Conscious Control over Action

This post is by Joshua Shepherd (pictured above), a Wellcome Trust Research Fellow at the Oxford Uehiro Centre for Practical Ethics, and a Junior Research Fellow at Jesus College. Joshua's work concerns issues in the philosophy of mind, action, cognitive science, and practical ethics. In this post he discusses the role of conscious experience in the control of action, and summarises his recent paper 'Conscious Control over Action' published in Mind and Language. 

One question we might have concerns the kinds of causal contributions consciousness makes to action control. Another concerns the relative importance of consciousness to action control. If consciousness is relatively unimportant, theorizing about ‘conscious control’ might be largely a waste of time. If consciousness is important, however, understanding its contributions could be essential to a full understanding of the way we exercise control over our behaviour.

Although some philosophers and cognitive scientists have argued that consciousness is unimportant for action control, I argue in a recent paper that the opposite is probably true. The key is to see conscious processes as a part of a broader structure that enables action control, and to see where consciousness tends to fit into that structure. Consciousness certainly does not do everything for action control – but the things it does look to be important.

Here is an example of what I have in mind. Many have emphasized the fact that non-conscious visual processes appear to play an important role in structuring fine-grained elements of action control. Such processes contribute information to structures that enable features of action control like accurate shaping of grip size, or accurate tracking of action targets in the environment. Even if this is true, however, I argue that extended processes of action control often require not just fine-grained elements such as scaling one’s grip or tracking an action target.

Action control requires the maintenance and updating of action plans, the preparation of contingency plans in response to anticipated difficulties, and the flexible management of capacities such as attention. Action control requires, that is, not just implementational capacities of the sort non-conscious vision may support, but executive capacities. And it looks like consciousness plays important roles for the deployment of these executive capacities.

Monday 14 December 2015

PERFECT 2016: False but Useful Beliefs

Project PERFECT is very proud to announce its first workshop, on False but Useful Beliefs, to be held in London on 4th and 5th of February 2016. The workshop will take place at Regent's Conferences and Events in Regent's Park (see picture below). The idea of the workshop is to explore a variety of beliefs and belief-like states that are epistemically faulty (either false or badly supported by evidence) but that also play a useful function for the agent, whether biologically, psychologically, pragmatically, epistemically, or in some other way.

The workshop features three types of talks.

1. Talks by invited speakers who are leading experts in the area. 

Anandi Hattiangadi from Stockholm University will talk about radical interpretation and implicit cognition, Neil Van Leeuwen from Georgia State University will discuss agent-like stimuli in religious practice, and David Papineau from King's College London and CUNY will ask whether functional falsity refutes teleosemantics.

2. Talks by an excellent mix of early- and mid-career philosophers from all over the world, selected via a call for papers earlier in 2015.

Jesse Summers from Duke University will talk about the benefits of rationalisation, Lubomira Radoilska from the University of Kent will ask whether false beliefs are conducive to agential success in a non-accidental way, Patrizia Pedrini from the University of Florence will discuss self-deception, David Kovacs from Cornell University will talk about false but useful beliefs about ordinary objects, and Kate Nolfi from the University of Vermont will argue that there are epistemically faultless false beliefs.

3. Talks by project team members reporting on their progress with PERFECT. 

Ema Sullivan-Bissett (post-doc on PERFECT, working on belief) will talk about false but useful beliefs about epistemic normativity, and I (Lisa Bortolotti, PI on PERFECT) will ask whether positive illusions are epistemically innocent.

If you want to attend the conference, please register here by 15th January. Registration is heavily subsidised by the project, and at £30 (£20 for students and unemployed) it just covers lunch and refreshments over the two days of the workshop.

Hope to see many of you there!

Thursday 10 December 2015

MAP@Leeds Implicit Bias Workshop

On 15th-16th October 2015 the University of Leeds Minorities and Philosophy chapter hosted the MAP@Leeds Implicit Bias Conference. The conference included a large number of high-quality talks covering a wide range of issues relating to implicit bias. This report focuses on three of these talks.

In her talk, “What do we want from a model of implicit bias?”, Jules Holroyd (pictured above) noted that competing models of implicit cognition have emerged from different sources, with different priorities and conceptual frameworks. She set out a framework for assessing these competing models: some desiderata for a model of implicit cognition, together with some test cases, against which she considered recent models of implicit bias. She showed that the models provided by Levy, Schwitzgebel, Mandelbaum, Gendler and Machery each fail to meet the desiderata. In developing this argument she provided a clear articulation of what an account of implicit cognition should do, bringing to light important cases that have been ignored in much discussion of implicit cognition and implicit bias.

In his talk, Ian James Kidd (pictured above) asked “Can We Retain Confidence in Philosophy in the Light of Implicit Bias?” He distinguished a number of forms of confidence that might be threatened by knowledge of implicit bias: confidence in oneself as a philosopher; confidence in other philosophers, or philosophers as a collective; and confidence in philosophy’s agenda, heritage and future. He argued that the aggressive adversariality in philosophy, which is hostage to psychosocial biases, has the potential to be particularly damaging to each of these forms of confidence. Moreover, discoveries about implicit bias threaten the authority of reason and the idea that philosophers identify solid foundations for knowledge through the philosophical enterprise. However, Kidd argued that discoveries about implicit bias also have a positive impact: they highlight the advantages of an ancient vision of philosophy’s nature and purpose, according to which it involves identifying obstacles that prevent flourishing, and then identifying and implementing ameliorative strategies that facilitate flourishing. Philosophy, on this view, can be seen as improving understanding in order to facilitate the transformation of how people live. With regard to implicit bias, then, philosophy can help us understand the phenomenon and transform the ways of living that it shapes.

In their talk on “The Pragmatics of Inclusivity” Katharine Jenkins and Jennifer Saul (pictured above) focused on ways to improve philosophy teaching. They argued that it is important to diversify syllabi, so that more of the ideas of members of minority groups are represented. However, they argued that additional action is required to successfully combat the negative effects of phenomena such as implicit bias and stereotype threat. They argued that it is necessary to emphasise the social group membership of members of minority groups where their work is taught, e.g. when teaching the work of a Black woman one might highlight her gender and racial group membership. They recognised, however, that statements such as “here is Joan Smith, she is Black” could produce an unintended conversational implicature, suggesting to students that there is something negative about being female or Black. They argued that it is necessary to cancel this implicature by explicitly stating why the social group membership of the individual is being emphasised, i.e. to combat the negative effects of biases.

Congratulations to the organisers of this conference for bringing together so many excellent philosophers working at the cutting edge of this extremely interesting topic.  

Tuesday 8 December 2015

Decision-Making Capacity Incapacitated

This post is by André Martens, pictured above. Here André summarises his recent paper ‘Paternalism in Psychiatry: Anorexia Nervosa, Decision-Making Capacity, and Compulsory Treatment’, appearing in New Perspectives on Paternalism and Health Care edited by Thomas Schramme.

Currently, decision-making capacity (DMC) is intensively discussed in disciplines such as bioethics, philosophy of psychiatry, and psychology. Some authors regard it as (mental) competence. But what exactly is DMC? What are the mental preconditions of making genuine decisions? And what role does DMC play in ethics, especially regarding the normative status of treatment decisions made by psychiatric patients with reduced, or even completely absent, DMC? In my paper I try to answer these questions.

First, I look at the so-called traditional account of DMC, which is associated with the work of Paul S. Appelbaum and Thomas Grisso, among others. Here, DMC is formulated in terms of certain abilities, each being a necessary condition for the ascription of DMC. These abilities are:
  1. Understanding (of the factual information relevant for the focal decision),
  2. Appreciation (of the consequences and significance of the focal decision for one’s own life),
  3. Reasoning (being able to engage in reasoning processes such as weighing and comparing alternatives),
  4. Communication of Choice (DMC requires the ability to communicate one’s own choice).
This account focuses primarily on cognitive abilities. And this is exactly why it cannot account for a lack of DMC in some psychiatric disorders such as anorexia nervosa. Patients afflicted by severe forms of this potentially life-threatening eating disorder regularly achieve average or even above-average results in tests that operationalize the traditional account of DMC (e.g. the MacCAT-T). Nevertheless, and this is admittedly an intuitionist thesis, at least some instances of anorexic decision making appear to be ‘flawed’, for example, the refusal of life-saving treatments in terminal anorexia nervosa. Therefore, abilities not captured by the traditional account of DMC seem to be relevant as well. But which ones?

Inspired by the work of Jacinta Tan and Louis Charland, I defend the following thesis:

Inclusion thesis: Any full account of DMC must include at least one (explicit) evaluative or emotional element.

Monday 7 December 2015

Questioning Optimism

I'm Adam Harris and I'm an experimental psychologist from University College London.

I am perhaps an unusual contributor to the Imperfect Cognitions blog as I have argued that cognitions might seem imperfect because of imperfections in prevalent methodologies, predominantly arising from a failure to appreciate the importance of understanding the appropriate normative basis of a task. Specifically, my work has suggested that the assumed ubiquity of optimism across our species is based on questionable evidence.

A prominent example of this work is presented in a paper I wrote with Ulrike Hahn (published in Psychological Review), in which we demonstrated, through simulation, that rational agents could be labelled as optimistic on the prevalent, comparative method of testing unrealistic optimism. On this method, participants respond to the question "Compared with the average student of your age and sex, how likely are you to..." where future life events are inserted for the ellipsis.

Responses are typically provided on a -3 (much less likely than the average) to +3 (much more likely) scale, where a response of zero represents 'about the same as average'. The logic of the test rests on the recognition that if participants are accurately reporting their chances, then their responses should average zero. Consequently, any deviation from zero is taken as indicative of a systematic bias. The oft-observed result that average responses on this scale to negative events are significantly negative is taken as evidence that, on average, the members of the group underestimate their relative chances. Because we do not wish to experience negative events, such a result is taken as evidence of optimism.

In Harris and Hahn (2011), we demonstrated that three statistical artifacts could generate the oft-observed pattern of results from rational Bayesian agents who are, by definition, unbiased. Such a result raises questions over the interpretation of the same pattern of results observed in human participants. There is no longer any evidence for bias on these tests if the pattern of results is consistent with that of rational agents. Essentially, these tests fail the major prerequisite for a satisfactory test of bias: unbiased agents appear biased! In ongoing (as yet unpublished) research, we have failed to identify any evidence for optimism after controlling for these confounding artifacts (my website will be updated when these results are published).
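To give a feel for how accurate agents can end up looking "optimistic" on a bounded comparative scale, here is a minimal sketch of one such artifact. This is not Harris and Hahn's actual simulation: the risk distribution, the linear scale mapping, and all numerical values are illustrative assumptions. The key ingredient is a skewed risk distribution (most agents face low risk, a few face high risk) combined with a scale that clips at +3 and -3.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed skewed risk distribution: 90% of agents face a 1% chance of the
# negative event, 10% face a 91% chance. Population mean risk is ~10%.
risk = np.where(rng.random(n) < 0.9, 0.01, 0.91)
mean_risk = risk.mean()

# Each agent reports a perfectly accurate comparative judgement:
# (own risk - average risk), mapped linearly onto the bounded -3..+3 scale.
# The mapping factor of 10 is an arbitrary illustrative choice.
response = np.clip(np.round(10 * (risk - mean_risk)), -3, 3)

# The low-risk majority report a mild -1; the high-risk minority's large
# positive difference is clipped at +3, so it cannot offset the majority.
print(response.mean())  # negative: seeming 'optimism' from unbiased agents
```

The mean response comes out negative even though every agent reports their relative risk accurately, illustrating why a non-zero group mean on such a scale is not, by itself, evidence of bias.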

In addition to raising specific concerns about the understanding of comparative unrealistic optimism, this work highlights, more generally, the importance of understanding what participants’ responses represent and the appropriate normative standard for those responses. In unrealistic optimism research, participants’ responses represent their understanding of their own risk and the average person’s risk. Normatively, their own risk includes their estimate of the base rate as well as any individuating information they possess. This insight is a critical consideration when evaluating conclusions from any measure designed to assess bias in risk estimates about real-world events (see also Harris et al., 2013).

Fortunately, there is also an initial, easy check for an optimism bias that cannot be accounted for on statistical grounds. Any statistical account has the same directional implications for events of opposite valence. If, for example, a statistical account predicts lower responses for negative events, it will also predict lower responses for comparable positive events. Because of the reversed desirability of positive and negative events, however, the same direction of effect that constitutes optimism for one valence would constitute pessimism for the opposing valence.

Thus, the inclusion of both positive and negative events can serve as a first-stage litmus test to identify a possible confounding influence of statistical artifacts. I therefore recommend that researchers routinely include both positive and negative events in their tests of optimism. In my own work, all such tests to date have observed the same direction of effect in each valence. This constitutes seeming optimism in one valence and pessimism in the other, thus failing to provide the conclusive evidence required for optimism.

Thursday 3 December 2015

The 17th International Conference on Philosophy, Psychiatry and Psychology - INPP 2015

The 17th International Conference on Philosophy, Psychiatry and Psychology – International Network for Philosophy and Psychiatry INPP 2015 – on the topic ‘Why do humans become mentally ill? Anthropological, biological and cultural vulnerabilities of mental illness’ – was held in Frutillar, Chile, on October 29th, 30th and 31st, 2015. The conference was organised by the Centro de Estudios de Fenomenología y Psiquiatría, Universidad Diego Portales, Santiago, Chile, in coordination with the International Network for Philosophy and Psychiatry, INPP, to promote and share cross-disciplinary research in the field of philosophy and mental health.

All the lectures and seminars were housed in Teatro del Lago (picture above) located on the lake in Chilean Patagonia, with an inspiring natural setting and stunning architecture. The programme consisted of 23 plenary conferences, 54 oral presentations, 6 panel discussions, and more than 30 posters of researchers coming from all over the world.

Here I summarize only a small sample of talks from this super interesting event.

Tuesday 1 December 2015

Bayesian Accounts and Black Swans

In this post Ryan McKay summarises his recent paper 'Bayesian Accounts and Black Swans: Questioning the Erotetic Theory of Delusional Thinking'.

Matthew Parrott and Philipp Koralus (hereafter P&K) offer a fresh take on 'imperfect cognitions'. In their recent post they outline how their 'erotetic theory' can account for certain instances of fallible human reasoning. They illustrate this with an example about a fridge containing either beer or wine and cheese (I confess that I fell for the fallacy here; I presume my critical faculties were disarmed by my stomach).

My purpose in this brief post is not to contest their analysis of such examples, but to summarise my evaluation of their erotetic approach to delusional thinking, raising my own questions about their theory in the process.

The Core Claim

P&K’s core claim is that deluded individuals are less inquisitive than healthy individuals; in particular, deluded individuals are selectively deficient in raising endogenous questions, while having no problem raising or answering exogenous questions (which include 'default questions in response to external stimuli' as well as questions posed by others). However, without any rigorous way of distinguishing endogenous questions from exogenous questions, the hypothesis that deluded individuals are impaired in raising the former seems hard to falsify – any question that a deluded individual shows themselves capable of asking could be rationalised as 'externally stimulated', and thereby exogenous, after the fact. Meanwhile, the claim that deluded individuals 'would have no problem taking on board and answering questions that are put to [them] by someone else' (Parrott and Koralus 2015: 400) is already contradicted by available evidence, as some deluded individuals are completely impervious to external questioning (e.g., see Breen, Caine, and Coltheart 2002).
  • Q1) Given that P&K suggest that some questions a person asks are 'externally stimulated', and thereby exogenous, how can we reliably distinguish endogenous questions from exogenous questions?
  • Q2) If deluded individuals are selectively deficient in raising their own questions, why are they unable to fully utilize and retain questions that others raise?