Tuesday 27 February 2018

Dissociative Identity Disorder, Ambivalence and Responsibility

Today's post is by Michelle Maiese, Associate Professor of Philosophy at Emmanuel College in Boston, Massachusetts. Her research focuses on topics in philosophy of mind, philosophy of psychiatry, and moral psychology.

There has been debate among philosophers about how to address issues of responsibility in cases where subjects suffer from dissociative identity disorder (DID). If one personality commits a wrongful act of which another was unaware, should we regard this individual as responsible for her actions? If we regard DID as a case in which multiple persons inhabit a single body, it may seem natural to conclude that each alter is a separate agent and that one alter is not responsible for the actions of another. However, in “Dissociative Identity Disorder, Ambivalence, and Responsibility”, I argue that even once we acknowledge that a subject with DID is a single person, there are still serious reasons to question the extent to which she is responsible for her actions.

This is because a subject suffering from DID often will find it difficult to exercise autonomous agency. This individual cannot control the slide into one or another alter-state, and once she is in that state, she will lack awareness of many considerations favoring a particular course of action. In addition, due to disturbances in memory and self-awareness, the subject with DID is either incapable of remembering prior decisions, or incapable of being properly motivated by them. Even if a subject decides on a course of action, other desires and priorities may ‘take over’ once she switches to a different alter-personality. Also, there may be so much psychological fragmentation and memory loss that it becomes difficult for her to foresee what she will do or assess the long-term consequences of her actions.

I argue that these impairments in agency are the direct result of extreme ambivalence: young children who develop DID experience extreme inner conflict regarding emotional needs to which they feel deeply attached. Suppose that Sue hates her mother and wants her to die, but also loves her mother and wants to have a close relationship with her. Rationality demands that Sue alter her desires appropriately. However, suppose that Sue feels so strongly attached to both of these conflicting desires that there is no way to achieve a well-integrated, unified perspective. 

What allows her to avoid crumbling under the pressure of inner contradictions is the belief that her conflicting mental states belong to separate selves. That is, she both accepts certain desires and tries to rid herself of them, and those desires that seem like ‘unacceptable intruders’ are handed off to an alter-personality. This ‘handing-off’ of desires and actions thus can be understood as Sue’s attempt to mask contradictions and manage inner conflict. Although extreme dissociation may intensify emotional disturbance over the long-term, it may be in Sue’s short-term interests in the sense that it allows her to compartmentalize painful feelings and memories.

Such compartmentalization can be paralyzing or lead to other disruptions of agency. It is notable that “competing” alter-personalities often vie for control of the body. For example, alters sometimes intervene in the lives of other alters by destroying their school work, spending their money, or hiding their things. This lack of a coherent will is also evidenced by the phenomenon of waverings, when one alter attempts to do something that is directly at odds with the goals and intentions of another. Such struggles for control should be understood as the outward signs of inner conflict. Because the subject with DID suffers from persistent and pervasive ambivalence, she does not form an integrated will and is largely incapable of restructuring it. Since her concerns and attitudes are not integrated, she is unable to arrive at an ‘all-things-considered’ judgment about what it would be best to do.

If it is true that subjects with DID suffer from extreme ambivalence of the sort I describe, then it would be a mistake to regard them as responsible for their wrongful actions in the same way that we regard ordinary adults as responsible. However, although autonomy and responsibility are eroded in such cases, they do not disappear altogether. If such a subject behaves wrongfully, there certainly is ‘part’ of her that wanted to do so, and thus, the action is attributable to her. Furthermore, even if she cannot exercise self-determination, it is important to acknowledge that her overall capacity for autonomous agency remains intact. This means there may be steps she can and should take to attempt to restore her autonomy or prevent any immoral actions from occurring.

Thursday 22 February 2018

On Folk Epistemology

Mikkel Gerken is associate professor at the University of Southern Denmark. In this post he writes about his new book ‘On Folk Epistemology: How We Think and Talk about Knowledge’.

A central claim of my book, On Folk Epistemology: How We Think and Talk about Knowledge, is that some folk epistemological patterns of knowledge ascriptions are best explained by cognitive biases. I argue that this approach to folk epistemology yields diagnoses of some hard puzzles of contemporary epistemology. So, On Folk Epistemology seeks to contribute to some prominent debates in contemporary epistemology. For example, I criticize contextualism, pragmatic encroachment, and knowledge-first epistemology, among other views. If you want to check it out, there is an introduction and overview here.

In this blog post, however, I will emphasize why the study of folk epistemology is an important task. In a nutshell, it is because folk epistemology is extremely consequential. Consider, for example, the roles of knowledge ascriptions in our social interactions. We acquire the ability to think and talk about knowledge early in life. Moreover, mental and linguistic ascriptions and denials of knowledge remain extremely prominent in adulthood. Indeed, linguistic knowledge ascriptions are arguably among the most important speech acts that we engage in on a daily basis.

To ascribe knowledge to oneself or to someone else is a powerful speech act that gives the proposition said to be known a special status. Often it indicates that we are in a position to act on the proposition. Moreover, the subject to whom knowledge is ascribed is often given a stamp of social approval or disapproval. Just consider phrases such as “she is in the know” or “he doesn’t know what he is talking about.” Consequently, knowledge ascriptions are central to many of the social scripts that govern social life. So, if our knowledge ascriptions and intuitions about them are biased, we’d want to understand how and why. After all, we do not want to make our decisions about whom to trust and how to act based on biased judgments.

Understanding the biases of our folk epistemology is all the more urgent given that they may lead to social injustices. This may be the case if biases reflect stereotypes that pertain to gender, race or class. While epistemic injustices may be caused by general “identity prejudices”, folk epistemological biases are especially relevant to distinctively epistemic injustices.

After all, they may lead us to mistakenly regard someone who in fact knows that p as not knowing it. Thus, biases of our folk epistemology may lead to “wrongs done to someone specifically in their capacity as a knower” which is Miranda Fricker’s initial conception of epistemic injustice (Fricker 2007). At present, we do not know enough about whether folk epistemological biases interact with biases pertaining to gender, race or class. Here I think of On Folk Epistemology as providing part of a framework for further research on epistemic injustice.

Tuesday 20 February 2018

Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization

Today's post is by Jonathan Ellis, Associate Professor of Philosophy and Director of the Center for Public Philosophy at the University of California, Santa Cruz, and Eric Schwitzgebel, Professor of Philosophy at the University of California, Riverside. This is the second in a two-part contribution on their paper "Rationalization in Moral and Philosophical Thought" in Moral Inferences, eds. J. F. Bonnefon and B. Trémolière (Psychology Press, 2017) (part one can be found here).

Last week we argued that your intelligence, vigilance, and academic expertise very likely don't do much to protect you from the normal human tendency towards rationalization – that is, from the tendency to engage in biased patterns of reasoning aimed at justifying conclusions to which you are attracted for selfish or other epistemically irrelevant reasons – and that, in fact, you may be more susceptible to rationalization than the rest of the population. This week we’ll argue that moral and philosophical topics are especially fertile grounds for rationalization.

Here’s one way of thinking about it: Rationalization, like crime, requires a motive and an opportunity. Ethics and philosophy provide plenty of both.

Regarding motive: Not everyone cares about every moral and philosophical issue, of course. But we all have some moral and philosophical issues that are near to our hearts – for reasons of cultural or religious identity, or personal self-conception, or for self-serving reasons, or because it’s comfortable, exciting, or otherwise appealing to see the world in a certain way.

On day one of their philosophy classes, students are often already attracted to certain types of views and repulsed by others. They like the traditional and conservative, or they prefer the rebellious and exploratory; they like confirmations of certainty and order, or they prefer the chaotic and skeptical; they like moderation and common sense, or they prefer the excitement of the radical and unintuitive. Some positions fit with their pre-existing cultural and political identities better than others. Some positions are favored by their teachers and elders – and that’s attractive to some, and provokes rebellious contrarianism in others. Some moral conclusions may be attractively convenient, while others might require unpleasant contrition or behavior change.

The motive is there. So is the opportunity. Philosophical and moral questions rarely admit of straightforward proof or refutation, or a clear standard of correctness. Instead, they open into a complexity of considerations, which themselves do not admit of straightforward proof and which offer many loci for rationalization.

These loci are so plentiful and diverse! Moral and philosophical arguments, for instance, often turn crucially on a “sense of plausibility” (Kornblith, 1999); or on one’s judgment of the force of a particular reason, or the significance of a consideration. Methodological judgments are likewise fundamental in philosophical and moral thinking: What argumentative tacks should you first explore? How much critical attention should you pay to your pre-theoretic beliefs, and their sources, and which ones, in which respects? How much should you trust your intuitive judgments versus more explicitly reasoned responses? Which other philosophers, and which scientists (if any), should you regard as authorities whose judgments carry weight with you, and on which topics, and how much?

These questions are usually answered only implicitly, revealed in your choices about what to believe and what to doubt, what to read, what to take seriously and what to set aside. Even where they are answered explicitly, they lack a clear set of criteria by which to answer them definitively. And so, if people’s preferences can influence their perceptual judgments (including possibly of size, color, and distance: Balcetis and Dunning 2006, 2007, 2010) what is remembered (Kunda 1990; Mele 2001), what hypotheses are envisioned (Trope and Liberman 1997), what one attends to and for how long (Lord et al. 1979; Nickerson 1998) . . . it is no leap to assume that they can influence the myriad implicit judgments, intuitions, and choices involved in moral and philosophical reasoning.

Furthermore, patterns of bias can compound across several questions, so that with many loci for bias to enter, the person who is only slightly biased in each of a variety of junctures in a line of reasoning can ultimately come to a very different conclusion than would someone who was not biased in the same way. Rationalization can operate by way of a series or network of “micro-instances” of motivated reasoning that together have a major amplificatory effect (synchronically, diachronically, or both), or by influencing you mightily at a crucial step (Ellis, manuscript).

We believe that these considerations, taken together with the considerations we advanced last week about the likely inability of intelligence, vigilance, and expertise to effectively protect us against rationalization, support the following conclusion: Few if any of us should confidently maintain that our moral and philosophical reasoning is not substantially tainted by significant, epistemically troubling degrees of rationalization. This is of course one possible explanation of the seeming intractability of philosophical disagreement.

Or perhaps we, the authors of this post, are the ones rationalizing; perhaps we are, for some reason, drawn toward a certain type of pessimism about the rationality of philosophers, and we have sought and evaluated evidence and arguments toward this conclusion in a badly biased manner? Um…. No way. We have reviewed our reasoning and are sure that we were not affected by our preferences....

Thursday 15 February 2018

Social Media and Youth Mental Health

On 14th November there was an interesting conference at the Royal Society of Medicine on the effects of social media on mental health.

Mary Aiken (University College Dublin) discussed The Cyber Effect, her book which addresses the risks of social media for young people. Cyberspace is a real space, and we need to consider its impact on vulnerable populations such as teens. We need to factor in developmental aspects (at what age should parents let children have a smartphone?). We need to recognise the continuous evolution of behaviour, and as experts we need to drive policy initiatives and develop guidelines for parents and educators.

Jon Goldin (Great Ormond Street Hospital) talked about the risks and benefits of social media for young people. Children like using social media for different reasons: they use it for communication, to express themselves, to gain confidence, for popularity, for entertainment, to develop a sense of belonging, to receive information. Social media is risky for adolescents: it may cause a lack of sleep; due to anonymity, it encourages bad behaviour such as cyberbullying; it may facilitate gambling; and it can be used to research suicide methodology.

The worrying data suggest a correlation between social media use and mental health issues. There can be advantages to the use of social media, such as the ready availability of information, but the reasons to worry are greater than the reasons to be optimistic unless measures are taken to regulate the use of social media. One worry concerns anorexia nervosa: whereas some sources offer support, there are sites inviting people to be anorexic and offering tips to avoid food. Another worry concerns child protection, such as preventing grooming and sexting.

What are the possible solutions to these problems? There needs to be more education (sex education and internet security in schools). There needs to be an acknowledgement that social media has good effects, and an open discussion about it with adolescents. Social media cannot be banned entirely, but there need to be boundaries, such as no more than two hours of social media a day and no social media in the bedroom after a certain time.

Tuesday 13 February 2018

Rationalization: Why your intelligence, vigilance and expertise probably don't protect you

Today's post is by Jonathan Ellis, Associate Professor of Philosophy and Director of the Center for Public Philosophy at the University of California, Santa Cruz, and Eric Schwitzgebel, Professor of Philosophy at the University of California, Riverside. This is the first in a two-part contribution on their paper "Rationalization in Moral and Philosophical Thought" in Moral Inferences, eds. J. F. Bonnefon and B. Trémolière (Psychology Press, 2017).

We’ve all been there. You’re arguing with someone – about politics, or a policy at work, or about whose turn it is to do the dishes – and they keep finding all kinds of self-serving justifications for their view. When one of their arguments is defeated, rather than rethinking their position they just leap to another argument, then maybe another. They’re rationalizing – coming up with convenient defenses for what they want to believe, rather than responding even-handedly to the points you're making. You try to point it out, but they deny it, and dig in more.

More formally, in recent work we have defined rationalization as what occurs when a person favors a particular view as a result of some factor (such as self-interest) that is of little justificatory epistemic relevance, and then engages in a biased search for and evaluation of justifications that would seem to support that favored view.

You, of course, never rationalize in this way! Or, rather, it doesn’t usually feel like you do. Stepping back, you’ll probably admit you do it sometimes. But maybe less than average? After all, you’re a philosopher, a psychologist, an expert in reasoning – or at least someone who reads blog posts about philosophy, psychology, and reasoning. You're especially committed to the promotion of critical thinking and fair-minded reasoning. You know about all sorts of common fallacies, and especially rationalization, and are on guard for them in your own thinking. Don't these facts about you make you less susceptible to rationalization than people with less academic intelligence, vigilance, and expertise?

We argue that the answer is no. You’re probably just as susceptible to post-hoc rationalization as the rest of the population, maybe even more so, though the ways it manifests in your reasoning may be different. Vigilance, academic intelligence, and disciplinary expertise are not overall protective against rationalization. In some cases, they might even enhance one’s tendency to rationalize, or make rationalizations more severe when they occur.

While some biases are less prevalent among those who score high on standard measures of academic intelligence, others appear to be no less frequent or powerful. Stanovich, West and Toplak (2013), reviewing several studies, find that the degree of myside bias is largely independent of measures of intelligence and cognitive ability. Dan Kahan finds that on several measures people who use more “System 2” type explicit reasoning show higher rates of motivated cognition rather than lower rates (2011, 2013, Kahan et al 2011). Thinkers who are more knowledgeable have more facts to choose from when constructing a line of motivated reasoning (Taber and Lodge 2006; Braman 2009). 

Nor does disciplinary expertise appear to be protective. For instance, Schwitzgebel and Cushman (2012, 2015) presented moral dilemma scenarios to professional philosophers and comparison groups of non-philosophers, followed by the opportunity to endorse or reject various moral principles. Professional philosophers were just as prone to irrational order effects and framing effects as were the other groups, and were also at least as likely to “rationalize” their manipulated scenario judgments by appealing to principles post-hoc in a way that would render those manipulated judgments rational.

Furthermore, since the mechanisms responsible for rationalization are largely non-conscious, vigilant introspection is not liable to reveal to the introspector that rationalization has occurred. This may be one reason for the “bias blind spot”: people tend to regard themselves as less biased than others, sometimes even exhibiting more bias by objective measures the less biased they believe themselves to be (Pronin, Gilovich and Ross 2004; Uhlmann and Cohen 2005). Indeed, efforts to reduce bias and be vigilant can amplify bias. You examine your reasoning for bias, find no bias because of your bias blind spot, and then inflate your confidence that your reasoning is not biased: “I really am being completely objective and reasonable!” (as suggested in Ehrlinger, Gilovich and Ross 2005). People with high estimates of their objectivity might also be less likely to take protective measures against bias (Scopelliti et al. 2015).

Partisan reasoning can be invisible to vigilant introspection for another reason: it need not occur in one fell swoop, at a sole moment or a particular inference. Rather, it can be the result of a series or network of “micro-instances” of motivated reasoning (Ellis, manuscript). Celebrated cases of motivated reasoning typically involve a person whose evidence clearly points to one thing (that it’s their turn, not yours, to do the dishes) but who believes the very opposite (that it’s your turn). But motives can have much subtler consequences.

Many judgments admit of degrees, and motives can have impacts of small degree. They can affect the likelihood you assign to an outcome, or the confidence you place in a belief, or the reliability you attribute to a source of information, or the threshold for cognitive action (e.g., what would trigger your pursuit of an objection). They can affect these things in large or very small ways.

Such micro-instances (you might call it motivated reasoning lite) can have significant amplificatory effects. This can happen over time, in a linear fashion. Or it can happen synchronically, spread over lots of assumptions, presuppositions, and dispositions. Or both. If introspection doesn't reveal motivated reasoning that happens in one fell swoop, micro-instances are liable to be even more elusive.

This is another reason for the sobering fact that well-meaning epistemic vigilance cannot be trusted to preempt or detect rationalization. Indeed, people who care most about reasoning, or who have a high “need for cognition”, or who attend to their cognitions most responsibly, may be the most impacted of all. Their learned ability to avoid the more obvious types of reasoning errors may naturally come with cognitive tools that enable more sophisticated, but still unnoticed, rationalization.

Coming next week: Why Moral and Philosophical Disagreements Are Especially Fertile Grounds for Rationalization.

Thursday 8 February 2018

Challenges to Wellbeing

The workshop Challenges To Wellbeing: The Experience of Loneliness and Epistemic Injustice in the Clinical Encounter originated from a multi-disciplinary conversation about wellbeing and happiness. Exploring the theme of challenges to wellbeing, this conversation brought together academics from across the University, practitioners, and campaigners. The workshop was hosted by Lisa Bortolotti and Sophie Stammers for project PERFECT, and co-organised and funded by the Institute of Advanced Studies (IAS). It was held at the Centre for Professional Development (CPD) in the Medical School on the 22nd of November 2017. This is a detailed report on the talks given that day.

The workshop was divided into three sessions. Session One was dedicated to Themes from Project PERFECT.

Kathy Puddifoot started with an Introduction to Epistemic Injustice. She defined epistemic injustice and spoke about the different types that have been identified. Kathy explained that since the processes of giving and receiving knowledge are social, we rely on others for these to happen in a fair way. Epistemic injustice happens when people are wronged in their capacity as knowers and thus treated unfairly in these processes of knowledge exchange.

The first type of epistemic injustice, which was identified by Miranda Fricker, is testimonial injustice. Examples of this type of injustice are the stereotypes that women depend on intuition, that black people are athletic rather than intelligent, or that people with mental health issues are crazy. In cases of this type, specific stereotypes determine the lack of credibility given to people from those groups.

Kathy then went on to describe the second type of epistemic injustice identified by Miranda Fricker, hermeneutical injustice. Kathy explained that people need specific resources (e.g. conceptual resources) to understand and articulate their own experiences. But as a result of how society is structured, some stigmatized groups can be denied such resources, and this puts them in a disadvantageous position within that society. Fricker’s example of this is the term sexual harassment, and Kathy added the example of postnatal depression. An important distinction here is between a lack of concepts on the part of the person who has the experience and a lack of conceptual resources on the part of other people. Some members of groups have their own understanding of their experiences but are not able to explain those experiences to non-group members.

The last type of epistemic injustice that Kathy talked about is testimonial silencing, identified by Kristie Dotson. Its two forms are testimonial quieting (which happens when ‘an audience fails to identify the speaker as a knower’) and testimonial smothering (which happens when the speaker believes her testimony will be misinterpreted, so she self-silences). Kathy said that the example Dotson gives of this second form is black domestic violence victims in the U.S. Kathy noted that epistemic injustice can be wilful and intentionally chosen, or unintentional. Empirical findings suggest that implicit bias can lead people to avoid eye contact with members of certain groups, and this can cause members of these groups to silence themselves. Kathy argued that this could be a case of implicit bias leading to a form of epistemic injustice, namely testimonial smothering.

The second speaker was Alex Miller Tate. Alex talked about how issues of epistemic injustice emerge in the psychiatric encounter. He did so by discussing the paper Epistemic Injustice in Psychiatry by P. Crichton, H. Carel, and I. Kidd (2016). Alex explained that the paper focuses on testimonial injustice and argues for three points. First, psychiatric service users are particularly vulnerable to testimonial injustice in their interactions with medical staff.

Second, a similar sort of injustice might emerge from the fact that medical professionals might enjoy undue credibility inflation. Third, two factors may contribute to these injustices: global prejudices about the mentally ill, and specific prejudices about people with specific diagnoses. The example Alex gave is people diagnosed with schizophrenia, who are seen as intrinsically violent. This can contribute to undue credibility deflation of these people's testimonies. The recommendation made in the paper is for medical staff to develop specific dispositions to behave in ways that actively seek to give respect and credibility to patients’ testimonies.

Alex raised two questions about this recommendation. First, stigma and prejudice are thought of as extending beyond individual interactions in the psychiatric encounter, and are conceived in terms of social structure. Alex argued that among these structural failings is psychiatric service users' lack of what he called socially accepted markers of credibility. An instance of such markers is the credibility we generally give to people who are well dressed and whose personal hygiene is impeccable.

This quality can be lost in people with specific psychiatric issues. Alex argued that these markers are not shared widely enough, and that even if there is nothing wrong with using markers of credibility as such, injustice can be perpetuated when people use them in the clinical encounter. He also argued that service users might lack access to the kinds of epistemic resources that are necessary to make sense of their experience. Here the problem seems to go beyond the responsibility of the clinician. It seems to relate to how knowledge in psychiatry is produced and disseminated to the wider population. The final point in relation to epistemic resources was that service users might develop such resources themselves, as illustrated by service user movements. But attempts to communicate these insights to clinicians are often undervalued.

The second question Alex raised was whether service users are vulnerable to other forms of epistemic injustice apart from the ones identified by Crichton, Carel and Kidd. Alex argued that at times the testimony involved is risky. This happens for various reasons, one being that there is a chance of harm if the testimony is misinterpreted, and another that psychiatric professionals have power over service users, which they can use appropriately or not. These include the powers of detention, of enforcing treatment, and of giving or refusing care. All of these different kinds of risks, Alex argued, are present when testimony about symptoms or experiences is involved, especially in recent history and crisis care.

I gave the third talk, On the Experience of Loneliness and Solitude. The first part of the talk was about the experience of loneliness. I provided a review of the existing empirical research, raised some philosophical questions and concerns, and talked about loneliness in adolescence. I dedicated the second part of the talk to solitude and some of its benefits.

I started by talking about how research done over the past decades has shown the influence that loneliness has on our mental and physical health. I then referred to a Meta-Analysis of Interventions to Reduce Loneliness (Masi, Chen, Hawkley and Cacioppo, 2011), where four different strategies used in interventions for alleviating loneliness are identified.

Tuesday 6 February 2018

"Me and I are not friends"

Today's post is by Dr Pablo López-Silva, who is Lecturer in Psychology at the Faculty of Medicine of the Universidad de Valparaíso in Chile. He is the director of the three-year FONDECYT Research Project 'The Agentive Architecture of Human Thought', granted by the National Commission for Scientific and Technological Research of the Government of Chile.

Pablo López-Silva currently works on the philosophy of mind, clinical psychiatry, and psychopathology with a special focus on the way mental pathologies and empirical research inform our understanding of the nature of consciousness.

Self-awareness, i.e. the awareness we have of being the subject of our own experience, is perhaps one of the most elusive elements of the human mind. A common idea within current philosophy of mind is that the awareness we have of different external and internal experiences might necessarily involve a degree of self-awareness. In other words, every time you reach for a cup, read a book, and so on, you enjoy a degree of awareness of yourself as the one who is doing the reaching, reading, etc. Although such an idea sounds highly intuitive, philosophers disagree on the ways in which the link between our awareness of our experiences and our self-awareness is established.

A very specific group of philosophers has suggested that a sense of mineness intrinsically contained in the qualitative structure of all conscious experiences is a necessary condition for a subject to become aware of himself as the subject of his experiences. Thus, on this view, consciousness necessarily entails phenomenal self-awareness.

In my latest paper, titled 'Me and I are not friends, just acquaintances: On thought insertion and self-awareness', I first argue that cases of delusions of thought insertion undermine this claim, and that such a phenomenal feature plays little role in accounting for the most minimal type of self-awareness entailed by phenomenal consciousness. Patients suffering from thought insertion report the belief that external agents of heterogeneous nature have placed thoughts into their minds/heads. I’m aware of the fact that my strategy for evaluating this argument is not new in philosophy.

As a second step, I offer a systematic evaluation of all the strategies used by the defenders of this view to deal with the challenge from thought insertion. Finally, I conclude that most of these strategies are unsatisfactory, for they rest on unwarranted premises, imprecisions about the agentive nature of cognitive experiences, and, especially, a lack of distinction between the different ways in which subjects can become aware of their own thoughts.

For further questions and comments, just drop me an email!

Thursday 1 February 2018

PERFECT 2018 Confabulation Workshop

On Wednesday 23rd May, PERFECT will host its third annual workshop, at St Anne’s College, Oxford. This year, our topic is confabulation, and we’re excited to welcome leading researchers in the field for a stimulating programme of presentations.

The talks explore a number of philosophical issues arising from confabulation, and will be of interest to philosophers of mind, philosophers of psychology and epistemologists. Papers to be presented also examine confabulation in relation to wider research programmes in cognitive science and psychiatry, and so we also welcome researchers from all disciplines of the mind who are interested in how we give accounts of our experiences, choices and actions.

The speakers will address a range of issues, with some exploring an aspect of confabulation that is underdeveloped or has been overlooked in previous work, whilst others propose a new model of the phenomenon that helps to explain and bring clarity to existing observations.

In her talk, Sarah Robins will focus on the possibility of veridical confabulation and how this possibility pushes us to clarify successful remembering and explaining, as well as what it tells us about the nature and extent of the phenomenon of confabulation itself.

William Hirstein will consider how recent empirical research counts against the standard model of Capgras syndrome, a condition in which patients confabulate about their loved ones. He will offer an alternative to the standard interpretation of the condition, arguing that this is a better fit with the evidence.

Derek Strijbos and Leon de Bruin extend previous work undertaken on the “mindshaping” interpretation of confabulation as an alternative to the mainstream “mindreading” view in their talk, offering an exploration of the relevant folk psychological norm that underwrites this interpretation.

I’ll look at confabulation in the context of a more general faculty for organising experience into a coherent narrative, and how interventions to reduce confabulation must account for the epistemic benefits of this faculty as revealed by other research programmes in cognitive science.

In her talk, Louise Moody argues that the notion of confabulation is central to understanding key characteristics of the phenomenology of dreaming, and demonstrates that this revisionary theory of dreams reveals new insights into confabulation.

I hope you’ll join us in May! For the full programme, further information, and to register, please follow this link.