Thursday 31 December 2015

Transparency in Belief and Self-Knowledge

In this post I report on the Teorema-sponsored workshop on Transparency in Belief and Self-Knowledge, held at the University of Oviedo (pictured below) on 9th and 10th November 2015, and organized by Luis M. Valdés-Villanueva. Below I summarise the talks given by Sarah Sawyer, Miriam McCormick, José Zalabardo, and Jordi Fernández.

In her talk ‘Contrastivism and Anti-Individualism’ Sawyer argued that contrastive self-knowledge entails externalism about mental content. According to contrastivism about knowledge, saying that a subject S knows a proposition p is elliptical for saying that S knows that p rather than that q. Understanding this requires positing a positive contrast class (the set of propositions in contrast to which S knows that p) and a negative contrast class (the set of propositions in contrast to which S does not know that p). Sawyer argued that internalism about mental content rules out a negative contrast class in the self-knowledge case, and so entails that self-knowledge is non-contrastive. It follows that if self-knowledge is contrastive, externalism about mental content is true.

In her paper ‘The Contingency of Transparency’, McCormick argued that Transparency is not a conceptual truth (as its proponents have held) and, further, that it is not even a psychological fact in all cases of deliberative belief formation. McCormick adopted Nishi Shah’s characterization of Transparency: when asking oneself whether to believe that p, one must ‘immediately recognize that this question is settled by, and only by, answering the question whether p is true’ (Shah 2003: 447). According to Transparency, one cannot take non-alethic considerations as reasons for belief when deliberating over whether to believe some proposition. McCormick considered three cases which she presented as counterexamples to the Transparency thesis. She then considered three ways the Transparency theorist might understand these cases, and discussed how we might adjudicate between these contrary interpretations. She concluded with some implications of, and challenges for, her claim that Transparency does not always characterize our deliberation over what to believe.

In his paper ‘Pragmatism and Truth’, Zalabardo sought to flesh out the pragmatist position and differentiate his version of pragmatism from similar, competing views. Zalabardo's pragmatism makes central use of speakers’ attitudes of approval and disapproval directed towards their own mental states and those of others. Unlike Robert Brandom's pragmatism, Zalabardo’s view does not imply that one counts as believing a content only if one is able and prepared to defend it with reasons. This had direct implications for the earlier discussions of Transparency. In contrast to Huw Price's pragmatism, Zalabardo does not explain the difference between idealism and pragmatism by invoking multiple conceptions of representation. Instead, pragmatism avoids collapsing into idealism by denying the cogency of taking up an ‘external’ perspective on our cognitive practices in general, a manoeuvre Zalabardo credited to Quine.

Tuesday 29 December 2015

A Functionalist Approach to the Concept of 'Delusion'

This post is by Gottfried Vosgerau, Professor of Philosophy at the University of Düsseldorf. Gottfried's research interests are in the philosophy and metaphysics of mind, neurophilosophy, and cognitive science. Here he summarises his recent paper, co-authored with Patrice Soom, 'A Functionalist Approach to the Concept of 'Delusion', published in Journal for Philosophy and Psychiatry.

According to the widely accepted DSM definition, delusions are commonly held to be false beliefs about reality that are not shared by the community the subject lives in, and that are sustained despite overwhelming counter-evidence. In our paper, we argue that this conceptualization cannot be used for a scientific investigation of delusions. For this purpose, we argue, delusions should be defined as mental states with asymmetric inferential profiles: while they have inferential impact on other mental states, they are not affected by other mental states (especially not in a way that would lead to revision). This definition can be nicely captured in functionalist terms and summarized with the slogan that delusions are mental states that are immune to revision.

Here we do not wish to repeat the arguments against the DSM definition, some of which are well known; they are based on examples showing that the definition is either too narrow or too wide. Nor do we want to go into the technical details of the functional definition. Instead, we would like to briefly discuss two broader points:

i) What is a 'scientific' definition of delusions and why do we need one?
ii) Why is a functional definition best suited?

The DSM is a manual used by both researchers and clinicians. For this reason, there is a multitude of constraints applying to the classification and definition of mental disorders. 'Scientificity' or 'scientific adequacy' is only one of the possible criteria to be taken into account in defining delusions. Other constraints include the ease and reliability of everyday diagnosis, political goals (e.g. ensuring that therapy costs are covered by insurance), social considerations (e.g. destigmatization), and therapeutic implications. All of these goals are equally important. However, finding a middle way between such heterogeneous constraints comes with the risk of conflating different dimensions of the debate. For example, while the distinction between the healthy and the pathological is socially and politically most significant, this dimension is of little relevance for a scientific understanding of the mechanisms leading to this or that behaviour.

Monday 28 December 2015

Meaning and Mental Illness

For our series of first-person accounts, Kitt O'Malley, blogger and mental health advocate, writes about her experience of altered states and what these mean to her.

When I was twenty-one, upon returning from my grandfather’s memorial mass, at which I gave the eulogy, I experienced the first of a series of altered mental states which I chose to interpret as God calling me to the ordained ministry. I questioned that sense of call due to my intellectual skepticism, my agnosticism, and the fact that I had a history of mental illness, namely major depression and dysthymia. God did not speak to me in my altered mental states: I heard no voices and saw no visions. The altered states I entered were sometimes ecstatic and sometimes tempting and dark. My interpretation of my experiences was influenced by my familiarity with the works of Alan Watts and D.T. Suzuki on Zen Buddhism, C.S. Lewis’ The Screwtape Letters, and Roman Catholic mystic saints.

As I received no definitive instructions, I didn’t know exactly what God called me to do, but I chose to identify with the mystic saints and believed that God called me to seminary training. I did not pursue a seminary education at that time. Later, when I was thirty, after being prescribed antidepressants, I experienced a week-long psychotic state in which simultaneous thoughts raced through my mind in binary (zeroes and ones), about chaos theory, and about Roman Catholic mystic saints. Even after the psychotic break, my diagnosis remained dysthymia, with the episode believed to be a reaction to the antidepressant medication.

Thursday 24 December 2015

Mind, Body and Soul: Mental Health Nearing the End of Life

On 10th November 2015 the Royal Society of Medicine hosted a very interesting conference, entitled "Mind, Body and Soul: An update on psychiatric, philosophical and legal aspects of care nearing the end of life". Here is a report of the sessions I attended on the day.

In Session 1, Matthew Hotopf (King's College London) talked about his experience of treating people with depression in palliative care. Antidepressants are effective relative to placebo. People with strong suicidal ideas are in a difficult situation, as they cannot easily be moved to psychiatric wards due to the special care they need. The important thing is to be able to contain the risk of death by suicide and self-harm. Hotopf concluded by saying that it is normal to have extreme emotions near the end of life, and this does not mean that one suffers from a mental disorder.

Annabel Price, Consultant Psychiatrist at the Cambridgeshire and Peterborough NHS Foundation Trust, pictured above, focused on issues surrounding the desire for death: How can it be measured? Does it change over time? Does treatment for depression affect the desire for death? How should clinicians respond to it? Suicide is widespread, and more common among the elderly, men, people with psychiatric disorders, and the unemployed. People with terminal illness are among the most vulnerable groups.

Evidence suggests that suicidal thoughts are very common in cancer patients: people desire death strongly after diagnosis, but their desire then often fades. A very small number of people with suicidal thoughts complete suicide (mostly the elderly, men, the socially isolated, and those affected by substance abuse). There seems to be a strong link between desire for death and depression. A very interesting result is that most people who express a desire for death would not seek to end their lives via assisted suicide. Another interesting finding from qualitative research is that expressing a desire for death can sometimes be a call for help, a wish for carers to take one seriously and pay attention, and also an attempt to regain control over one's own life, preserving self-determination.

Tuesday 22 December 2015

The Ethics of Delusion

This post is by Lisa Bortolotti. Here she reports on two recently published papers, co-written with Kengo Miyazono.

Kengo and I have recently been interested in how considerations raised in the philosophy of belief apply to delusions. In our review paper in Philosophy Compass (open access) we argue that the delusions literature has helped us focus on some key issues concerning the nature and development of beliefs. What conditions does a report need to satisfy in order to qualify as the report of a belief? What is the interaction between experience and inference in the process by which beliefs are formed?

Kengo and I also have a joint research paper that recently appeared in Erkenntnis (open access), where we ask what the ethics of belief can tell us about delusions. In this post I shall sum up our arguments in the paper, hoping for some feedback from our blog readers. There are several ways we can think of an ethics for belief. For instance, we could think that the fundamental epistemic norm is not to believe something for which there is no sufficient evidence. In that context, we could ask whether an agent is responsible for forming such a belief, and whether she should be blamed for it. Or we could think that the fundamental norm is to maximise epistemic value when adopting new beliefs, where epistemic value could be measured in terms of the ratio of true to false beliefs, epistemic utility, or an agent's epistemic virtue. Then, we would focus on the consequences of an agent adopting certain beliefs or following certain rules for the adoption of beliefs.

Our suggestion in the paper is that, no matter which approach we choose, it is not obvious that delusions as beliefs are ethically problematic. First, agents do not seem blameworthy for their delusional beliefs because, in the context in which delusions are formed, their ability to believe otherwise is significantly compromised due to reasoning impairments, biases, and motivational factors. From a deontological point of view, impairments, biases, and motivational factors prevent agents from adopting an alternative belief to the delusional one, and from recognising the epistemic shortcomings of their delusions.

Thursday 17 December 2015

Disturbed Consciousness

In this post, Rocco J. Gennaro (pictured below) presents his forthcoming edited book titled 'Disturbed Consciousness: New Essays on Psychopathologies and Theories of Consciousness'.

My name is Rocco J. Gennaro. I am Professor of Philosophy and Philosophy Department Chair at the University of Southern Indiana in Evansville, Indiana, USA. I received my Ph.D. in philosophy from Syracuse University in 1991. In 2009 I moved to the University of Southern Indiana from Indiana State University in Terre Haute, where I had been for fourteen years.

My main area of specialty is philosophy of mind/cognitive science and consciousness, but I also have strong interests in metaphysics, ethics, and early modern history of philosophy. I have published seven books (as either sole author or editor) and numerous papers in these areas, often defending a version of the higher-order thought (HOT) theory of consciousness. I have also written on animal and infant consciousness, episodic memory, and have defended conceptualism.

Tuesday 15 December 2015

Conscious Control over Action

This post is by Joshua Shepherd (pictured above), a Wellcome Trust Research Fellow at the Oxford Uehiro Centre for Practical Ethics, and a Junior Research Fellow at Jesus College. Joshua's work concerns issues in the philosophy of mind, action, cognitive science, and practical ethics. In this post he discusses the role of conscious experience in the control of action, and summarises his recent paper 'Conscious Control over Action' published in Mind and Language. 

One question we might have concerns the kinds of causal contributions consciousness makes to action control. Another concerns the relative importance of consciousness to action control. If consciousness is relatively unimportant, theorizing about ‘conscious control’ might be largely a waste of time. If consciousness is important, however, understanding its contributions could be essential to a full understanding of the way we exercise control over our behaviour.

Although some philosophers and cognitive scientists have argued that consciousness is unimportant for action control, I argue in a recent paper that the opposite is probably true. The key is to see conscious processes as a part of a broader structure that enables action control, and to see where consciousness tends to fit into that structure. Consciousness certainly does not do everything for action control – but the things it does look to be important.

Here is an example of what I have in mind. Many have emphasized the fact that non-conscious visual processes appear to play an important role in structuring fine-grained elements of action control. Such processes contribute information to structures that enable features of action control like accurate shaping of grip size, or accurate tracking of action targets in the environment. Even if this is true, however, I argue that extended processes of action control often require not just fine-grained elements such as scaling one’s grip or tracking an action target.

Action control requires the maintenance and updating of action plans, the preparation of contingency plans in response to anticipated difficulties, and the flexible management of capacities such as attention. Action control requires, that is, not just implementational capacities of the sort non-conscious vision may support, but executive capacities. And it looks like consciousness plays important roles for the deployment of these executive capacities.

Monday 14 December 2015

PERFECT 2016: False but Useful Beliefs

Project PERFECT is very proud to announce its first workshop, on False but Useful Beliefs, to be held in London on 4th and 5th February 2016. The workshop will take place at Regent's Conferences and Events in Regent's Park (see picture below). The idea of the workshop is to explore a variety of beliefs and belief-like states that are epistemically faulty (either false or badly supported by evidence) but that also serve a useful function for the agent, whether biologically, psychologically, pragmatically, epistemically, or in some other way.

The workshop features three types of talks.

1. Talks by invited speakers who are leading experts in the area. 

Anandi Hattiangadi from Stockholm University will talk about radical interpretation and implicit cognition, Neil Van Leeuwen from Georgia State University will discuss agent-like stimuli in religious practice, and David Papineau from King's College London and CUNY will ask whether functional falsity refutes teleosemantics.

2. Talks by an excellent mix of early- and mid-career philosophers from all over the world, selected via a call for papers earlier in 2015.

Jesse Summers from Duke University will talk about the benefits of rationalisation, Lubomira Radoilska from the University of Kent will ask whether false beliefs are conducive to agential success in a non-accidental way, Patrizia Pedrini from the University of Florence will discuss self-deception, David Kovacs from Cornell University will talk about false but useful beliefs about ordinary objects, and Kate Nolfi from the University of Vermont will argue that there are epistemically faultless false beliefs.

3. Talks by project team members reporting on their progress with PERFECT. 

Ema Sullivan-Bissett (post-doc on PERFECT, working on belief) will talk about false but useful beliefs about epistemic normativity, and I (Lisa Bortolotti, PI on PERFECT) will ask whether positive illusions are epistemically innocent.

If you want to attend the conference, please register here by 15th January. Registration is heavily subsidised by the project, and at £30 (£20 for students and unemployed) it just covers lunch and refreshments over the two days of the workshop.

Hope to see many of you there!

Thursday 10 December 2015

MAP@Leeds Implicit Bias Workshop

On 15th-16th October 2015 the University of Leeds Minorities and Philosophy chapter hosted the MAP@Leeds Implicit Bias Conference. The conference included a large number of high-quality talks covering a wide range of issues relating to implicit bias. This report focuses on three of these talks.

In her talk, “What do we want from a model of implicit bias?”, Jules Holroyd (pictured above) noted that competing models of implicit cognition have emerged from different sources, with different priorities and conceptual frameworks. She set out a framework for assessing these competing models: she proposed some desiderata for a model of implicit cognition, presented some test cases, and considered some recent models of implicit bias in light of the desiderata and cases. She showed that the models provided by Levy, Schwitzgebel, Mandelbaum, Gendler and Machery each fail to meet the desiderata. In developing this argument she provided a clear articulation of what an account of implicit cognition should do, bringing to light important cases that have been ignored in much discussion of implicit cognition and implicit bias.

In his talk, Ian James Kidd (pictured above) asked “Can We Retain Confidence in Philosophy in the Light of Implicit Bias?” He identified a number of different forms of confidence that might be threatened by knowledge of implicit bias: confidence in oneself as a philosopher; confidence in other philosophers, or in philosophers as a collective; and confidence in philosophy’s agenda, heritage, and future. He argued that the aggressive adversariality of philosophy, which is in thrall to psychosocial biases, has the potential to be particularly damaging to each of these forms of confidence. Moreover, discoveries about implicit bias present a threat to the authority of reason and to the idea that philosophers identify solid foundations for knowledge through the philosophical enterprise. However, Kidd argued that discoveries about implicit bias also have a positive impact: they highlight the advantages of an ancient vision of philosophy’s nature and purpose, according to which it involves identifying obstacles that prevent flourishing, and then identifying and implementing ameliorative strategies that facilitate flourishing. Philosophy, on this view, can be seen as improving understanding in order to transform how people live, and so as enabling us both to understand implicit bias and to change the ways of living it affects.

In their talk on “The Pragmatics of Inclusivity” Katharine Jenkins and Jennifer Saul (pictured above) focused on ways to improve philosophy teaching. They argued that it is important to diversify syllabi, so that more of the ideas of members of minority groups are represented. However, additional action is required to successfully combat the negative effects of phenomena such as implicit bias and stereotype threat. They argued that it is necessary to emphasise the social group membership of members of minority groups whose work is taught, e.g. when teaching the work of a Black woman one might highlight her gender and racial group membership. They recognised, however, that statements such as “here is Joan Smith, she is Black” could produce an unintended conversational implicature, suggesting to students that there is something negative about being female or Black. They argued that it is necessary to cancel this implicature by explicitly stating why the social group membership of the individual is being emphasised, i.e. to combat the negative effects of biases.

Congratulations to the organisers of this conference for bringing together so many excellent philosophers working at the cutting edge of this extremely interesting topic.  

Tuesday 8 December 2015

Decision-Making Capacity Incapacitated

This post is by André Martens, pictured above. Here André summarises his recent paper ‘Paternalism in Psychiatry: Anorexia Nervosa, Decision-Making Capacity, and Compulsory Treatment’, appearing in New Perspectives on Paternalism and Health Care edited by Thomas Schramme.

Currently, decision-making capacity (DMC) is intensively discussed in disciplines such as bioethics, philosophy of psychiatry, and psychology. Some authors regard it as (mental) competence. But what exactly is DMC? What are the mental preconditions of making genuine decisions? And what role does DMC play in ethics, especially regarding the normative status of treatment decisions of psychiatric patients with reduced, or even completely lacking DMC? In my paper I try to answer these questions.

Initially, I looked at the so-called traditional account of DMC, which is associated with the work of Paul S. Appelbaum and Thomas Grisso, among others. Here, DMC is formulated in terms of certain abilities, each being a necessary condition for the ascription of DMC. These abilities are:
  1. Understanding (of the factual information relevant for the focal decision),
  2. Appreciation (of the consequences and significance of the focal decision for one’s own life),
  3. Reasoning (being able to engage in reasoning processes such as weighing and comparing alternatives),
  4. Communication of Choice (DMC requires the ability to communicate one’s own choice).
This account focuses basically on cognitive abilities. And this is exactly the reason why it cannot account for a lack of DMC in some psychiatric disorders such as anorexia nervosa. Patients afflicted by severe forms of this potentially life-threatening eating disorder regularly achieve average or even above-average results in tests that operationalize the traditional account of DMC (e.g. the MacCAT-T). Nevertheless, and this is admittedly an intuitionist thesis, at least some instances of anorexic decision making appear to be ‘flawed’, for example, the refusal of life-saving treatments in terminal anorexia nervosa. Therefore, abilities not captured by the traditional account of DMC seem to be relevant as well. But which ones?

Inspired by the work of Jacinta Tan and Louis Charland, I defend the following thesis:

Inclusion thesis: Any full account of DMC must include at least one (explicit) evaluative or emotional element.

Monday 7 December 2015

Questioning Optimism

I'm Adam Harris and I'm an experimental psychologist from University College London.

I am perhaps an unusual contributor to the Imperfect Cognitions blog, as I have argued that cognitions might seem imperfect because of imperfections in prevalent methodologies, predominantly a failure to appreciate the appropriate normative basis of a task. Specifically, my work has suggested that the assumed ubiquity of optimism across our species is based on questionable evidence.

A prominent example of this work is presented in a paper I wrote with Ulrike Hahn (published in the Psychological Review) in which we demonstrated, through simulation, that rational agents could be labelled as optimistic on the prevalent, comparative method of testing unrealistic optimism. On this method, participants respond to the question "Compared with the average student of your age and sex, how likely are you to..." where future life events are inserted for the ellipsis.

Responses are typically provided on a -3 (much less likely than the average) to +3 (much more likely) scale, where a response of zero represents 'about the same as average'. The logic of the test rests on the recognition that if participants are accurately reporting their chances, then the average of their responses should be zero. Consequently, any deviation from zero is taken as indicative of a systematic bias. The oft-observed result that average responses on this scale to negative events are significantly negative is taken as evidence that, on average, the members of the group underestimate their relative chances. Because we do not wish to experience negative events, such a result is taken as evidence of optimism.

In Harris and Hahn (2011), we demonstrated that three statistical artifacts could generate the oft-observed pattern of results from rational Bayesian agents who are, by definition, unbiased. Such a result raises questions over the interpretation of the same pattern of results observed in human participants: if the pattern is consistent with that of rational agents, it no longer provides any evidence for bias. Essentially, these tests fail the major prerequisite for a satisfactory test of bias: unbiased agents appear biased! In ongoing (as yet unpublished) research, we have failed to identify any evidence for optimism after controlling for these confounding artifacts (my website will be updated when these results are published).
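One such artifact can be illustrated with a toy simulation (the numbers below are hypothetical, chosen only to make the point, and are not taken from the paper). When risk in the population is skewed, as for a rare negative event where a small minority is at high risk, the mean risk exceeds the risk faced by most people. Agents who report their relative standing with perfect accuracy then produce a negative group mean on the comparative scale:

```python
import statistics

# Hypothetical skewed population: most agents face low risk of a
# negative event, a small minority faces high risk.
risks = [0.05] * 90 + [0.55] * 10   # each agent's true probability of the event
avg_risk = statistics.mean(risks)   # (90 * 0.05 + 10 * 0.55) / 100 = 0.10

def comparative_response(own_risk, average):
    """An accurate comparative judgement on the -3..+3 scale:
    negative = 'less likely than average', positive = 'more likely'."""
    if own_risk < average:
        return -1
    if own_risk > average:
        return 1
    return 0

responses = [comparative_response(r, avg_risk) for r in risks]
mean_response = statistics.mean(responses)

# Every agent answers accurately, yet the group mean is negative
# ((90 * -1 + 10 * 1) / 100 = -0.8): the pattern usually read as optimism.
print(mean_response)
```

Because the mean risk is pulled up by the high-risk minority, most agents really are at below-average risk, so a majority of 'less likely than average' responses is exactly what accuracy predicts; no optimism is needed to produce the negative group mean.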

In addition to specifically raising concerns over the understanding of comparative unrealistic optimism, this work highlights, more generally, the importance of understanding what participants’ responses represent and the appropriate normative standard for those responses. In unrealistic optimism research, participants’ responses represent their understanding of their own risk and the average person’s risk. Normatively, their own risk includes their estimate of the base rate as well as any individuating information they possess. This insight is a critical consideration when evaluating conclusions from any measure designed to assess bias in risk estimates about real-world events (see also Harris et al., 2013).

Fortunately, there is also an initial, easy check for an optimism bias that cannot be accounted for on statistical grounds. Any statistical account has the same implications for events of opposite valence: if, for example, it predicts lower responses for negative events, it will also predict lower responses for comparable positive events. Because the desirability of positive and negative events is reversed, however, the same direction of effect that constitutes optimism for one valence would constitute pessimism for the other.

Thus, the inclusion of both positive and negative events can serve as a first-stage litmus test to identify a possible confounding influence of statistical artifacts. I therefore recommend that researchers routinely include both positive and negative events in their tests of optimism. In my own work, all such tests to date have observed the same direction of effect in each valence. This constitutes seeming optimism in one valence and pessimism in the other, thus failing to provide the conclusive evidence required for optimism.

Thursday 3 December 2015

The 17th International Conference on Philosophy, Psychiatry and Psychology - INPP 2015

The 17th International Conference on Philosophy, Psychiatry and Psychology – International Network for Philosophy and Psychiatry, INPP 2015 – on the topic ‘Why do humans become mentally ill? Anthropological, biological and cultural vulnerabilities of mental illness’ was held in Frutillar, Chile, on 29th, 30th and 31st October 2015. The conference was organised by the Centro de Estudios de Fenomenología y Psiquiatría, Universidad Diego Portales, Santiago, Chile, in coordination with the International Network for Philosophy and Psychiatry (INPP), to promote and share cross-disciplinary research in the field of philosophy and mental health.

All the lectures and seminars were housed in the Teatro del Lago (picture above), located on the lake in Chilean Patagonia, with an inspiring natural setting and stunning architecture. The programme consisted of 23 plenary lectures, 54 oral presentations, 6 panel discussions, and more than 30 posters by researchers from all over the world.

Here I summarize only a small sample of talks from this super interesting event.

Tuesday 1 December 2015

Bayesian Accounts and Black Swans

In this post Ryan McKay summarises his recent paper 'Bayesian Accounts and Black Swans: Questioning the Erotetic Theory of Delusional Thinking'.

Matthew Parrott and Philipp Koralus (hereafter P&K) offer a fresh take on 'imperfect cognitions'. In their recent post they outline how their 'erotetic theory' can account for certain instances of fallible human reasoning. They illustrate this with an example about a fridge containing either beer or wine and cheese (I confess that I fell for the fallacy here; I presume my critical faculties were disarmed by my stomach).

My purpose in this brief post is not to contest their analysis of such examples, but to summarise my evaluation of their erotetic approach to delusional thinking, raising my own questions about their theory in the process.

The Core Claim

P&K’s core claim is that deluded individuals are less inquisitive than healthy individuals; in particular, deluded individuals are selectively deficient in raising endogenous questions, while having no problem raising or answering exogenous questions (which include 'default questions in response to external stimuli' as well as questions posed by others). However, without any rigorous way of distinguishing endogenous questions from exogenous questions, the hypothesis that deluded individuals are impaired in raising the former seems hard to falsify – any question that a deluded individual shows themselves capable of asking could be rationalised as 'externally stimulated', and thereby exogenous, after the fact. Meanwhile, the claim that deluded individuals 'would have no problem taking on board and answering questions that are put to [them] by someone else' (Parrott and Koralus 2015: 400) is already contradicted by available evidence, as some deluded individuals are completely impervious to external questioning (e.g., see Breen, Caine, and Coltheart 2002).
  • Q1) Given that P&K suggest that some questions a person asks are 'externally stimulated', and thereby exogenous, how can we reliably distinguish endogenous questions from exogenous questions?
  • Q2) If deluded individuals are selectively deficient in raising their own questions, why are they unable to fully utilize and retain questions that others raise?

Monday 30 November 2015

An Illness of Thought

Today's post is by Jonny Ward who tweets as the Anxious Fireman and has also blogged for the Stigma Fighters. 

Hello to all reading this. My name is Jonny Ward. I’m 31, male, straight, white and a firefighter with Greater Manchester Fire and Rescue Service. I can grout the shower, chop logs, I have an anxiety disorder, I have travelled a lot of the world and I enjoy mountain biking.

If you had to ask me about any one of my interests or traits from that list, which would it be? And why?

If this had been someone else and it was three years ago, I would have picked the anxiety disorder part. Because back then I was just as naive about mental ill health as most men. I would consider it unusual, a strange thing to say or admit to.

But over the last two years my mind-set has changed tremendously. I suffered with anxiety and panic attacks. It happened after a long period of stress. I was doing too much work, too many projects, trying to please too many people. All of which meant I burnt out, blacked out in a restaurant and woke up anxious. I started having panic attacks and in classic panic attack form became afraid of having more panic attacks around people and looking stupid.

It’s taken me a while to really understand what made me ill. It wasn’t the situation, the work load, the “stresses”. These were all purely material and circumstantial. It was my own thoughts. My cognition if you will.

Every thought I had was becoming negative, was I good enough, was I pleasing others, was I achieving enough! Was I! Am I! What if! What if! WHAT IF!!!

A negative thought affects your health, as any thought in the mind does. If it’s positive you feel slightly better, negative, slightly worse. Just like drinking alcohol, a little doesn’t kill your liver, but drown your liver in alcohol every day and eventually it will become very sick.

This is what I was doing. I was becoming a negative thoughtaholic. My self-belief, confidence all became undermined and I made myself ill. I managed to turn things around but it was difficult. I had CBT and counselling and medication.

I have recently buried a second friend (who was also a fireman) who completed suicide. He had been very poorly in his head for a while and, in my opinion (I have no medical knowledge), had been thinking very negatively about who he was, his relationships, his standing in life, how he was seen, for a very long time. His thoughts were very poorly. What really killed him though, I think, is his fear of getting help and “coming out”.

His ego, or pride would not allow it, his thought process would not allow him to accept he was human and not a ‘man’.

When I told my watch at the station, friends and family I was suffering and vulnerable, I got closer to them than I ever thought possible. I wish my friend had taken that step.

My love to you all

Thursday 26 November 2015

‘Pathologizing Mind and Body’ Workshop in Leuven

What is the relation between mental disorders and physical disorders? Is it possible to find a biological basis for mental disorders? What purpose would a reduction of mental disorders to physical disorders serve? These are some of the questions addressed at the workshop ‘Pathologizing Mind and Body’ organized by Jonathan Sholl and Marcus Eronen. Philosophers, psychologists and psychiatrists approached this topic from different angles and highlighted problems and recent developments.

In the first talk, Ignaas Devisch stressed the distinction between suffering and pain, claiming that suffering is not a phenomenon that is amenable to diagnostic operationalisation and medicalization. He argued that because of this, victims of traumatic events should be offered an ethical debriefing in addition to psychological debriefing, which allows them to focus on their suffering and the existential problems they face from a non-medical perspective.

Arantza Etxeberria talked about ‘Interactive Bioloops and Pathology’. She pointed out that biolooping is a common phenomenon, with social levels thoroughly influencing biological levels of functioning. Because of this, there is no clear mental/physical distinction when it comes to pathologies. Furthermore, the complex and dynamic nature of individual-organism relations suggests that personalized medicine will be a fruitful way to proceed. 

Denny Borsboom introduced the audience to the network approach to mental disorders. Rather than searching for a unified cause that underlies a variety of pathological symptoms in a certain disorder, this model focuses on the network of pathological symptoms as a mutually reinforcing structure. On this model, high connectivity between the different network nodes or symptoms is the hallmark of a disordered mind. Denny concluded that according to this understanding of mental disorder, a disorder as an entity is more analogous to a flock of birds than to a unitary thing.
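The network idea sketched above can be made concrete with a toy model. The sketch below is purely illustrative and not drawn from Borsboom's own formalism: symptoms (hypothetical names) are nodes, mutual reinforcement is an edge, and "connectivity" is measured crudely as edge density.

```python
# Toy illustration of the network view of mental disorders:
# a disorder is a densely connected cluster of mutually reinforcing
# symptoms, not a unitary underlying cause.
from itertools import combinations

def connectivity(symptoms, edges):
    """Edge density: observed edges over all possible symptom pairs."""
    possible = len(list(combinations(symptoms, 2)))
    return len(edges) / possible if possible else 0.0

symptoms = ["insomnia", "fatigue", "low mood", "worry"]

# A loosely connected network: symptoms barely sustain one another.
sparse = {("insomnia", "fatigue")}

# A highly connected network: each symptom feeds several others.
dense = {("insomnia", "fatigue"), ("fatigue", "low mood"),
         ("low mood", "worry"), ("worry", "insomnia"),
         ("insomnia", "low mood")}

# On the network model, high connectivity is the hallmark of disorder.
assert connectivity(symptoms, dense) > connectivity(symptoms, sparse)
```

The point of the comparison is that the same symptoms can be present in both networks; what differs, on this view, is only how tightly they reinforce one another, much as a flock is the birds plus their coordination.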

In his presentation ‘Fat or Obese – What Difference Does it Make?’ Andreas de Block considered objections to the medicalization of fatness by proponents of fat studies. He criticized the link fat studies scholars often make between pathologizing and moralizing and discussed studies which show some potential benefits of pathologizing obesity. Andreas also explored the possible effects of labelling obesity as a mental disorder analogous to addiction.

Tuesday 24 November 2015

The Erotetic Theory of Delusional Thinking

This post is by Matthew Parrott and Philipp Koralus. Here they summarise their recent paper ‘The Erotetic Theory of Delusional Thinking’, published in Cognitive Neuropsychiatry. 

Matthew Parrott

In this paper, we appeal to the recently developed erotetic theory of reasoning in order to explain three patterns of anomalous reasoning associated with delusion: mistaking a loved-one for an impostor (as in the Capgras delusion), the well-documented tendency to ‘jump to conclusions’, and surprising improvements in a certain reasoning task involving conditionals (Mellet et. al. 2006).

According to the erotetic theory, the aim of human reasoning is to answer questions as directly as possible (for further discussion and for a formal account of the theory, see Koralus and Mascarenhas 2013). More precisely, according to the erotetic theory, reasoning proceeds by treating an initial premise or set of premises as a question and then treating subsequent information as a maximally strong answer to that question. Here is an informal illustration:

Suppose you are given a premise: there is either beer in the fridge, or there is wine and cheese in the fridge.

Informally, the erotetic theory holds that this premise will be cognitively processed by reasoners as the following question, or issue, that needs to be addressed: Am I in a beer-in-the-fridge situation or in a wine-and-cheese-in-the-fridge situation?

Now suppose the next piece of information you get is that there is cheese in the fridge. If you process that information as a maximally strong answer, resolving the issue you were trying to address, then you will conclude that you are in a wine-and-cheese-in-the-fridge situation.

Of course, it would be a fallacy to draw this conclusion based on the information available. Interestingly, it is a form of reasoning that most people are naturally disposed toward (Walsh and Johnson-Laird 2004). The erotetic theory captures this pattern of tempting fallacies, along with various others documented in the experimental literature, and predicts new ones. Crucially, according to the erotetic theory, what allows human reasoners to avoid fallacies is to raise enough further questions as the reasoning process progresses. What characteristically leads us astray when we succumb to fallacies is a lack of inquisitiveness (for details see Koralus and Mascarenhas 2013).
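The fridge example can be sketched in a few lines of code. This is a minimal informal rendering, not the formal erotetic theory of Koralus and Mascarenhas: alternatives are represented as sets of propositions, and the "maximally strong answer" step simply concludes whichever raised alternative mentions the new information, wholesale.

```python
# Hypothetical sketch of the "maximally strong answer" step in
# erotetic reasoning (illustrative only, not the formal theory).
def erotetic_update(alternatives, new_info):
    """Treat new_info as settling the question raised by the premise:
    keep only the raised alternatives that mention it, concluding
    each surviving alternative in full."""
    return [alt for alt in alternatives if new_info in alt]

# Premise: beer in the fridge, OR wine and cheese in the fridge.
# The reasoner raises exactly these two alternatives as the question.
alternatives = [{"beer"}, {"wine", "cheese"}]

# New information: there is cheese in the fridge.
conclusion = erotetic_update(alternatives, "cheese")
# The reasoner concludes the whole wine-and-cheese alternative -- the
# fallacy -- because beer-and-cheese was never raised as a possibility.

# A more inquisitive reasoner raises the extra alternative, and the
# question is no longer settled by "cheese" alone:
inquisitive = [{"beer"}, {"wine", "cheese"}, {"beer", "cheese"}]
unresolved = erotetic_update(inquisitive, "cheese")
```

On this sketch, avoiding the fallacy is exactly a matter of raising enough alternatives before accepting an answer as maximally strong, which is the sense of "inquisitiveness" the theory trades on.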

We were curious whether, with the help of the erotetic theory, we could make sense of seemingly outlandish thought patterns associated with delusions as extreme cases of tendencies that are present in all of us. The idea was to explore a model of delusional thinking as being like ordinary thinking except lacking inquisitiveness of a crucial sort.

According to the erotetic theory, delusional thinking is conceptualised in terms of the way individuals ask questions or in terms of how they go about answering those questions. In the paper, we propose that relevant patients entertain roughly the same default questions that most people strongly associate with various external stimuli, but that they either envisage fewer alternative possible answers to these questions or raise fewer follow-up questions as they proceed to try to answer them. This chiefly has a negative effect on the quality of conclusions drawn, but we argue that it can also yield some surprising performance advantages.

In the paper, we describe how lack of inquisitiveness can make sense of various thought patterns associated with delusion. We hope this brief introduction sparks interest in renewing efforts to understand reasoning, both ordinary and delusional, more systematically than we do at present.

Thursday 19 November 2015

Legal Fictions in Theory and Practice

In this post Maksymilian Del Mar (in the picture above) presents the recent book Legal Fictions in Theory and Practice (Springer 2015), co-edited with William Twining.

Treating Menorca as if it is a suburb of London, or a ship as if it was a person, or pretending that persons who form contracts are rational agents with knowledge of the commitments they are making, or that states which take over other states find a land empty of life (as in the doctrine of terra nullius) – or positing the existence of consent, malice, notice, fraud, intention, or causation when evidence clearly points to the opposite conclusion (or to no conclusion at all)…

All these are examples of legal fictions. They fly in the face of reality. And, in the literature on theories of law and legal reasoning, they are not very popular. In this new collection – Legal Fictions in Theory and Practice (Springer, 2015, co-edited by William Twining and Maks Del Mar) – 18 chapters explore another view: that not only are fictions pervasive in legal practice (and in very different legal traditions), they are also considerably more valuable cognitively than we have hitherto appreciated.

Tuesday 17 November 2015

PERFECT Year Two: Michael Larkin

Today's post is by Michael Larkin, Senior Lecturer in Psychology at the University of Birmingham and co-investigator in project PERFECT. Michael talks about his research interests for this second year of the project, and focuses on shared experience and parity of esteem.

My colleague Lisa Bortolotti has written recently about Project PERFECT, and the importance of understanding those aspects of human cognition which are common to both those who seek support from mental health services and those who do not. Lisa’s conceptual work illuminates some of the ways in which, at times, we all may hold beliefs which are difficult for others to share, or act upon reasoning which is difficult for others to understand.

Yesterday, I spent a fascinating morning with two clinical psychologists and a group of trainee clinical psychologists, exploring some of the differences and commonalities between ‘knowledge’ and ‘belief’ in our research and practice. We discussed how the task for the clinical psychologist often involves the gradual building of a bridge – a collaborative process - to span the gap between one person’s view of the world, and another’s. The psychologist is able to draw upon a wide field of knowledge (theory and evidence about the kinds of difficulties which people experience, and the kinds of factors which tend to cause and maintain them, for example), but must work with the service-user to understand which of these elements might be relevant and helpful to understanding their particular circumstances and context. Thus, formal knowledge and informal belief (about experience, and its meaning, for example) are combined. From this shared understanding, a formulation can be developed, which provides the basis for any therapeutic work that the psychologist and service-user might then decide to pursue together.

Lisa’s article about Project PERFECT suggests that once we have mutual understanding, we can see the commonalities in human experience, and we become able to see the difference between ‘someone who uses mental health services’ and ‘someone who does not use mental health services’ in a new way. We see it then as simply a difference in the intensity or persistence of a particular experience. 

For example, I saw the surreal and rather disturbing film The Lobster this week. The feeling of anxiety which it produced continued to perfuse my experience throughout the next day. Anxiety isn’t an experience which generally causes me too much trouble (I’m probably more prone to low mood), but when – after a good night’s sleep – the feeling had lifted, I did have cause to reflect on what it would have been like to cope with that feeling for longer, or for its effect to have been more pronounced. In circumstances where I had been required to cope with other stresses, and where I did not have recourse to that good night’s sleep, might my reaction have been different?

The intensity or persistence of our distress is often shaped by the context in which we find ourselves. This is generally a more helpful way of thinking about psychological wellbeing than considering the difference between ‘someone who uses mental health services’ and ‘someone who does not’ to be a difference between two ‘kinds’ of people – something which is generally underscored by the complex findings of genomic research. The importance of PERFECT’s message for anti-stigmatisation therefore is that there is no ‘them’ and ‘us’.

Monday 16 November 2015

Workshop on Unrealistic Optimism

On February 25th and 26th we will be hosting an interdisciplinary workshop on optimism at Senate House, London.

The first day will be dedicated to the question of what unrealistic optimism is and how it is caused. Why is it that we see such a widespread tendency to be unrealistically optimistic about our own future? Are the primary factors motivational or cognitive? What processes allow us to think ‘it won’t happen to me’?

We will be hearing from Tali Sharot about the brain processes underlying optimistic belief formation patterns; and Bojana Kuzmanovic will be speaking about evidence that optimistically biased belief updating recruits brain areas associated with motivational processes. I will be considering the question whether the widespread tendency to be unrealistically optimistic about one’s own future can be explained by the fact that these belief patterns were adaptive in the past. Constantine Sedikides will be discussing unrealistically optimistic beliefs as one type of motivationally driven self-enhancing belief.

On the second day, we turn to the question of what the effects of optimistically biased cognition are. Are they beneficial or do they increase the risk of bad things happening to us because they prevent us from taking precautions? James Shepperd will be reviewing findings from existing research on these questions and suggesting explanations for inconsistencies in these findings. Miriam McCormick will be exploring the concept of rational hope and putting forward conditions for judging hope as appropriate or inappropriate. Fernando Blanco’s talk will focus on potential health risks of unrealistic optimism and causal illusions and ways of reducing these. Finally, Lisa will be talking about engaged agency as a positive outcome of some cases of unrealistic optimism.

Thursday 12 November 2015

CRASSH Moral Psychology Conference

On 9th October CRASSH organised a Moral Psychology Interdisciplinary Conference in Cambridge, featuring keynote talks, panel discussions and discussion groups. This is a brief report on the sessions in which I participated.

The first session was a panel symposium with Josh Greene (Harvard) and Molly Crockett (Oxford) on the future of moral psychology, chaired by Richard Holton. Josh started by discussing the assumption in popular culture and the media that there is a “moral faculty” where all moral beliefs and decisions can be found, somewhere in the brain. But this is a myth according to him: morality is like a vehicle. “Vehicle” is a category, but what makes something a vehicle is not its internal mechanics. What makes something a vehicle is its function. Same with morality. If morality is a thing, people who study morality are interested in its function. The processes, operating principles and behavioural patterns involved in moral thinking are not distinctive of morality.

Molly made the point that no study so far has examined in the same sample social and non-social computation, moral and non-moral decision-making in the scanner, so there is still room to argue that there is something distinctive about morality. For Josh, morality is about cooperation and thus it is predominantly social. There may be a distinction between social and non-social pathways, but not about moral and non-moral over and beyond the social.

But are there discrete emotions? Probably not: emotions as we know them are probably just a blend of different elements (e.g., guilt is a mixture of arousal and fear). Molly argued that the peculiarities of guilt (how we feel about it, in what circumstances it emerges, etc.) partially depend on our way of living, and would manifest differently in a radically different culture. For both Molly and Josh, morality is about warmth and competence, where warmth is more important than competence.

The discussion then moved to how we learn. There are different systems for learning that are psychologically dissociable.
  • Goal-directed/model-based system: decisions and actions are based on a model of the world (controlled); more reason-based and cold. 
  • Model-free system: decisions and actions themselves take on positive or negative value (automatic); more emotional as it relies on affective weighing. 
  • Pavlovian system: approaching rewards and avoiding punishment (automated). 
Consequentialist thinking tends to be more commonly adopted as part of the goal-directed system, and deontological thinking tends to be favoured by the model-free system.

In the next session, Jill Craigie (King’s College London) and I talked about responsibility and control, in two different contexts. Jill (pictured below) focused on mental capacity and legal capacity in addiction, and in particular alcohol dependence. I focused on responsibility over actions motivated by delusional beliefs in people affected by schizophrenia, and responsibility for adopting delusional beliefs.

Talking about mental capacity and its assessment, Jill asked whether cognitive tests are sufficient to judge whether someone can be guided by reason. Are such tests construed in a way that is sufficiently broad to assess all the relevant capacities? Addiction does not fall under the Mental Health Act but it falls under the Mental Capacity Act. Among the capacities included in the notion of mental capacity, there is the capacity to make one’s own treatment decisions. Criteria for mental capacity are the ability to understand, retain and weigh up relevant information, and make a decision on the basis of that information.

Jill reported a case of Ms X, who was suffering from alcohol dependence and anorexia. She had a number of involuntary hospital admissions. The court considered her capacity both in relation to alcohol dependence and anorexia. They decided that she lacked capacity when it came to her anorexia (e.g., she lacked the capacity to weigh information about the consequences of treatment, and the compulsion not to eat was too strong for her to ignore), but she had capacity in relation to her alcohol dependence (e.g., she was choosing when and what to drink, and her drinking was responsive to events).

Although alcohol dependence could fall under the Mental Capacity Act, in fact it has been judged not to compromise mental capacity in England and Wales (but interestingly it has in the US and Australia). It is not clear what justifies treating anorexia and alcohol dependence differently in some of these cases. Anorexia is seen as a habit one cannot break, implying an inability to weigh the consequences of not eating. But alcohol dependence is not seen in the same way.

I discussed some case studies showing that the presence of psychotic symptoms in general and delusions in particular should not rule out by default that a person is responsible for behaviour that is motivated by such symptoms. I also referred to the high-profile case of Breivik.

After the lunch break, I attended the discussion group on psychopathy and moral motivation, led by Marion Godman (University of Helsinki). Marion (pictured below) introduced one of the main issues surrounding psychopathy, the high prevalence of psychopathic traits in people who are incarcerated for serious, violent crimes. She discussed the empirical finding according to which psychopathy is associated with spontaneous as opposed to reactive aggression.

From the data, she moved on to the analysis of different interpretations, and talked about a recent debate about whether the behaviour of people with psychopathy should be regarded as ‘mad’ or ‘bad’. Marion found that both ways of categorising psychopathy are characterised by unattractive consequences. An intense discussion followed, where some interesting parallels were drawn between psychopathy and autism (the former being associated with a compromised capacity to be motivated by punishment, and the latter with a compromised capacity to be motivated by rewards).

The conference was very stimulating and I had the pleasure to meet many young researchers in philosophy, psychology, and neuroscience, testifying to the genuinely interdisciplinary nature of the event.

Tuesday 10 November 2015

Is your brain wired for science, or for bunk?

This post is by Maarten Boudry (picture above), Research Fellow in the Department of Philosophy and Moral Sciences at Ghent University. Here Maarten writes about the inspiration for his recent paper, co-authored with Stefaan Blancke and Massimo Pigliucci, 'What Makes Weird Beliefs Thrive? The Epidemiology of Pseudoscience', published in Philosophical Psychology. 

Science does not just explain the way the universe is; it also explains why people continue to believe the universe is different than it is. In other words, science is now trying to explain its own failure in persuading the population at large of its truth claims. In Why Religion is Natural and Science is Not, philosopher Robert McCauley offers ample demonstrations of the truth of his book title. Many scientific theories run roughshod over our deepest intuitions. Lewis Wolpert even remarked that 'I would almost contend that if something fits with common sense it almost certainly isn't science.’ It is not so much that the universe is inimical to our deepest intuitions, it is that it does not care a whit about them (it is nothing personal, strictly business). And it gets worse as we go along. Newton’s principle of inertia was already hard to get your head around (uniform motion continuing indefinitely?), but think about the curvature of space-time in general relativity, or the bizarre phenomena of quantum mechanics, which baffle even the scientists who spend a lifetime thinking about them. Science does not have much going for it in the way of intuitive appeal.

Bearing all that in mind, it may seem remarkable, not that so many people refuse to accept the scientific worldview, but that so many have embraced it at all. Of course, science has one thing in its favour: it works. Every time your GPS device tells you where you are, or you heat up your soup by bombarding it with invisible waves, or you blindly entrust your fate to the hands of an able surgeon, you are relying on the achievements of science. Science is culturally successful despite the fact that it clashes with deeply engrained intuitions. By and large, people accept the epistemic authority of science—sometimes begrudgingly—because they admire its technological fruits and because deep down they know it is reasonable to defer to the expertise of more knowledgeable people. Without its technological prowess, which ultimately derives from the fact that it tracks truth, the scientific worldview would wither away. No system of beliefs could succeed in convincing so many people of so many bizarre and counterintuitive things, unless the truth was on its side, at least most of the time.

We can see that if we compare science with some of its contenders: religion, superstition, ideology, and in particular pseudoscience—belief systems that actively mimic the superficial trappings of science, trying to piggyback on its cultural prestige. By definition, pseudoscience does not have truth on its side (except by a sheer stroke of luck), or else we would just call it ‘science’. Because they defy reality, pseudosciences can boast of no genuine technological success. The army does not hire psychics (or so one hopes), homeopathy only has the placebo effect to count on, and creationists are marginalized in the scientific community, despite their persistent campaign for recognition.

But how do pseudoscience and other weird belief systems sustain themselves? They profit exactly from that which is lacking in science: intuitive appeal. Almost all pseudosciences tap into the cognitive biases, intuitions and heuristics of the human mind, courtesy of evolution by natural selection. Intuitive appeal makes up for lack of truth value. Pseudosciences have even developed ‘strategies’ to cope with the threat of adverse evidence, and to withstand critical scrutiny. In my dissertation Here Be Dragons and in a series of papers (here and here) with Johan Braeckman, I refer to these as ‘immunizing strategies’ and ‘epistemic defense mechanisms’. In our recent paper we have further pursued this analysis and compared the cultural dynamics of science and pseudoscience, developing what Dan Sperber called an ‘epidemiology of representations’. In this new work, we show how science achieves cultural stability, despite the fact that it flies in the face of pretty much every human intuition, and how ‘weird’ beliefs can thrive, under the false pretense of being scientific. Pseudoscience does not have truth on its side, but it does tap into our innate intuitions and biases, and is protected by its own in-built survival kit.