Thursday, 21 March 2019

The Ontology of Emotions

Today's post is written by Hichem Naar and Fabrice Teroni. In this post, Hichem and Fabrice present their new edited volume The Ontology of Emotions, recently published by Cambridge University Press.

Hichem Naar is Assistant Professor in philosophy at the University of Duisburg-Essen, a member of the Philosophical Anthropology and Ethics Research Group, and an associate member of the Thumos research group, the Genevan research group on emotions, values and norms hosted by the CISA, the Swiss Centre for Affective Sciences. Hichem currently works on the nature, value, and normative significance of various attitudes, including emotions.

Fabrice Teroni is Associate Professor in philosophy at the University of Geneva and co-director of Thumos. He works in the philosophy of mind and epistemology. He is also interested in the nature of emotions elicited by fiction, in the involvement of the self in emotions as well as in the phenomenology of memory. 



What kind of thing is an emotion? No one will seriously doubt that it is a psychological entity of some sort. There is also widespread agreement among philosophers regarding some of the features that are exemplified by emotions – they are felt, relatively short-lived, and directed at the world.

Recent work on the nature of emotions has almost exclusively focused on identifying their necessary features. Are emotions necessarily related to motivation? Do they necessarily have an object? What is the relationship between the phenomenal character of an emotion and its representational content? Are emotions themselves – in addition, say, to the beliefs that may cause them or the actions that they promote – assessable as justified, appropriate or rational?

While important in their own right, these questions have been so prominent in recent debates that, regrettably, very little attention has been devoted to identifying the general ontological category to which emotions belong.

Are emotions kinds of events? Their being short-lived may suggest so. But emotions are perhaps kinds of processes. Saying so would at least allow one to do justice to the idea that emotions have components or stages. Yet another possible position is that emotions are dispositions of some sort, suggesting a specific way of explaining their link with behaviour. 

Tuesday, 19 March 2019

I Err, Therefore I Think


Today's post is by Krystyna Bielecka (pictured above), assistant professor in the Institute of Philosophy at the University of Warsaw. For her PhD, Krystyna investigated the concept of mental representation and its use in philosophy and cognitive sciences. Her PhD thesis, entitled Błądzę, więc myślę. Co to jest błędna reprezentacja? (in English, “I err, therefore I think. What is misrepresentation?”), was awarded the Jerzy Perzanowski Prize by Jagiellonian University (Poland) for the best PhD Thesis in Cognitive Science in 2016.

Recently Krystyna has obtained a research grant from the National Science Centre (Poland) to pursue her research interests in the application of the concept of mental representation to certain psychopathologies. In the project, she asks whether certain mental illnesses, such as OCD and psychoses, or certain symptoms of mental disorders, such as confabulations or cognitive and emotional impairments of empathy, are necessarily representational, and when it is reasonable to explain them without using the concept of mental representation.

Philosophers of mind and cognitive scientists discuss the nature of thoughts, how they acquire content, and what it means for thoughts to correspond to reality. The modern debate over contentful thoughts, or mental representations, is dominated by the question of how to naturalize content. For example, Daniel Hutto and Eric Myin are skeptical that any naturalistic theory of mental representation could ever naturalize contents understood as satisfaction conditions (which is what they dub “The Hard Problem of Content”).

In my book Błądzę, więc myślę. Co to jest błędna reprezentacja? (I err, therefore I think. What is misrepresentation?), I argue for the significance of the possibility of making representational errors. In contrast to philosophers for whom the possibility of misrepresentation is a problematic (such as Donald Davidson, Gilbert Harman, Jerry Fodor) or even an unnecessary (Mark Perlman) feature of mental representation, I argue that the possibility of making errors detectable by the cognitive system itself is a sign that the cognitive system has access to the contents of its mental representations.

The detectability of errors is a necessary condition for the further possibility of correcting them, which is fundamental for learning. Furthermore, the argument for the possibility of misrepresentation is based on the premise of the overall rationality of cognitive agents – only if they have access to the content of their mental states can they recognize their mistakes and correct them.

Thursday, 14 March 2019

The Misinformation Age: how false beliefs spread

       
        


Today's post is written by Cailin O'Connor and James Owen Weatherall. In this post, they present their new book The Misinformation Age: How False Beliefs Spread, recently published by Yale University Press.

Cailin O’Connor is a philosopher of science and applied mathematician specializing in models of social interaction. She is Associate Professor of Logic and Philosophy of Science and a member of the Institute for Mathematical Behavioral Science at the University of California, Irvine. 


James Owen Weatherall is a philosopher of physics and philosopher of science. He is Professor of Logic and Philosophy of Science at the University of California, Irvine, where he is also a member of the Institute for Mathematical Behavioral Science.   



Since early 2016, in the lead-up to the U.S. presidential election and the Brexit vote in the UK, there has been a growing appreciation of the role that misinformation and false beliefs have come to play in major political decisions in Western democracies. (What we have in mind are beliefs such as that vaccines cause autism, that anthropogenic climate change is not real, that the UK pays exorbitant fees to the EU that could be readily redirected to domestic programs, or that genetically modified foods are generally harmful.)

One common line of thought on these events is that reasoning biases are the primary explanation for the spread of misinformation and false belief. To give an example, many have pointed out that confirmation bias – the tendency to take up evidence supporting our current beliefs, and ignore evidence disconfirming them – plays an important role in protecting false beliefs from disconfirmation.

In our recent book, The Misinformation Age: How False Beliefs Spread, we focus on another explanation of the persistence and spread of false belief that we think is as important as individual reasoning biases, or even more so. In particular, we look at the role social connections play in the spread of falsehood. In doing so we draw on work, by ourselves and others, in formal social epistemology. This field typically uses mathematical models of human interaction to study questions such as: how do groups of scientists reach consensus? What role does social structure play in the spread of theories? How can industry influence public beliefs about science?

Throughout the book, we use historical cases and modeling results to study how aspects of social interaction influence belief. First, and most obviously, false beliefs spread as a result of our deep dependence on other humans for information. Almost everything we believe we learn from others, rather than directly from our experience of the world. This social spread of information is tremendously useful to us. (Without it we would not have culture or technology!) However, it also creates a channel for falsehood to multiply. Until recently, we all believed the appendix was a useless evolutionary relic. Without social information, we wouldn’t have had that false belief.

Second, given our dependence on others for information, we have to use heuristics in deciding whom to trust. These heuristics are sometimes good ones – such as trusting those who have given us useful information in the past. Sometimes, though, we ground trust on things like shared identity (are we both in the same fraternity?) or shared belief (do we both believe homeopathy works?). As we show, the latter in particular can lead to persistent polarization, even among agents who seek the truth and who can gather evidence about the world. This is because when actors don’t trust those with different beliefs, they ignore the individuals who gather the very evidence that might improve their epistemic state.
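The polarization dynamic described above can be sketched in a few lines of code. What follows is a deliberately simplified toy model of our own devising – the numbers, the trust threshold, and the update rule are illustrative assumptions, not the actual models analysed in the book. Agents update on experimental reports only from sources whose credence is close to their own, so skeptics cut themselves off from exactly the evidence that would move them.

```python
def update(credence, successes, trials, p_better=0.6, p_null=0.5):
    """Bayesian update on a report, comparing 'the new action is better'
    (success rate 0.6) against the null hypothesis (success rate 0.5)."""
    like_better = p_better ** successes * (1 - p_better) ** (trials - successes)
    like_null = p_null ** successes * (1 - p_null) ** (trials - successes)
    numerator = credence * like_better
    return numerator / (numerator + (1 - credence) * like_null)

agents = [0.1, 0.2, 0.8, 0.9]  # credences that the new action is better
THRESHOLD = 0.3                # largest credence gap an agent will trust across

for _ in range(50):
    # Only agents who already believe in the new action try it out; for
    # simplicity each observes the expected outcome: 30 successes in 50 trials.
    reports = [(c, 30, 50) for c in agents if c > 0.5]
    updated = []
    for credence in agents:
        new_credence = credence
        for source, successes, trials in reports:
            if abs(credence - source) <= THRESHOLD:  # trust only the like-minded
                new_credence = update(new_credence, successes, trials)
        updated.append(new_credence)
    agents = updated

print([round(c, 3) for c in agents])  # skeptics are frozen; believers approach 1
```

Even though the skeptics here are perfectly good Bayesians about any report they accept, their trust heuristic means they never accept one, so the group ends up stably polarized.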

Tuesday, 12 March 2019

Epistemic Innocence and the Overcritical Juror



Should we trust eyewitnesses of crimes? Are jurors inclined to trust eyewitnesses more than they should? People tend to adopt a default position of trust towards eyewitness testimony, finding it highly convincing. However, as has now been widely acknowledged, eyewitnesses are subject to memory errors that can compromise the accuracy of their testimony. These two observations have pointed many researchers towards the conclusion that jurors do trust eyewitnesses more than they should.

However, in a recent paper, I argue that jurors are susceptible to being overcritical, assigning too little credence to eyewitness testimony, due to the presence of memory errors. How can this be so?

Jurors might adopt a default position of trust towards eyewitness testimony, but they are also prone to assuming that an eyewitness is generally unreliable when they notice individual errors in their testimony. For example, mock jurors are unlikely to base a judgement of guilt or innocence on testimony containing inconsistencies, even if the inconsistencies relate to trivial information that would not determine guilt or innocence (Hatvany and Strack 1980; Berman and Cutler 1996; Berman et al. 1995).

These individuals infer from the presence of errors in some trivial details to general unreliability of the testimony. My suggestion is that often inferences of this sort will be incorrect: people will make errors in their eyewitness testimony but the errors will not indicate general unreliability, instead being due to the ordinary operation of reliable cognitive mechanisms. Not only this, the errors will indicate the presence of ordinary, well-functioning cognitive mechanisms, which in fact facilitate people being good, trustworthy eyewitnesses.

Thursday, 7 March 2019

What Beauty Demands: An Interview with Heather Widdows

Today I have the pleasure to post an interview with my colleague Heather Widdows, John Ferguson Professor of Global Ethics at the University of Birmingham, who talks to us about her research interest in beauty and her very successful monograph, Perfect Me: Beauty as an Ethical Ideal.



LB: Your project examines beauty from a new angle. How did you first become interested in beauty as an ethical ideal?

HW: That’s a difficult question to answer as my passion for researching beauty crept up on me. Before working on beauty I was a fairly typical moral philosopher working in global ethics and justice. My main topic was defining global ethics as a multidisciplinary approach to philosophy, taking the real world and empirical evidence seriously. More broadly, I have worked on areas such as women’s rights, reproductive rights, genetic ethics and bioethics.

I guess my interest in beauty emerged from this long standing interest in gender justice. I recognised that something was happening in visual and virtual culture which was different, profoundly moral and no less connected to justice than other issues of health and wealth I had been working on. 

The challenge of body image anxiety as a global epidemic is an issue of global concern. Likewise, the extent to which the modified body is becoming regarded as normal, and even natural, challenges our understandings of what human beings are, and of the self, at least as much as advances in genetic technology or the emerging possibilities of Artificial Intelligence do. 

Such profound changes about our understanding of human beings, brought on by the emerging dominant beauty ideal, are not well recognised or researched. Perhaps it’s because beauty is seen as trivial, a matter of taste, or a ‘woman’s issue’ that we don’t take it seriously. But in a visual and virtual culture beauty matters, and it matters fundamentally. It provides our values and we judge ourselves, and others, according to it.

LB: In your recent monograph, Perfect Me, you argue that pressure on women to be perfect has increased and is now ‘more global’. What do you think is the reason for this increased pressure, and what makes you say that the preoccupation with beauty is more than a ‘first-world problem’?

HW: In Perfect Me I set out why the current beauty ideal – characterised by thinness, smoothness, firmness and youth – is now an emerging global ideal. This does not mean we all have to look the same, or even similar, but we do have to fall within a certain range. And while diversity might be locally true, globally it is not. Globally, the range of acceptable appearance norms for the face and the body narrows and becomes more demanding.

So while it might seem there is more diversity – more shades and colours of skin, and more shapes and sizes of models are visible – this is diversity within a very small range. To be beautiful – or just good enough – you must conform to most of the features of the beauty ideal. You can be big, and very big, only if you are also firm and smooth.

Yet firm curves are more demanding than thinness alone. And you can be hairy – look at Januhairy – but can you be both? Can you be fat and hairy and saggy and old? You cannot! As I say in Perfect Me ‘muffin tops’ and ‘love handles’ are not features of any version of thinness.

Evidencing the global nature of the ideal is the main focus of Chapter 3, ‘A New (Miss) World Order?’. In this chapter I document the narrowing of the normal range everywhere and the emergence of a global mean. The global beauty ideal is thin in some form (catwalk thin, thin with curves), firm (buff, shapely, athletic), smooth (hairless, with golden, bronze or coffee-coloured skin) and young-looking. 

This is not a mere expansion of Western ideals, but a global ideal, which is demanding of all racial groups. No racial group is good enough without ‘help’ – all need to be changed or added to. Everybody needs body work – diet and exercise, surgical and non-surgical technical fixes – to be ‘perfect’, or just ‘good enough’. While not all can engage, or afford to, all can aspire. Poverty is no barrier to aspiration, and I use the evidence of engagement in affordable trends (such as seeking thinness or using skin-lightening cream) as indicating engagement and aspiration, supporting the global trend.

Tuesday, 5 March 2019

Contributory Injustice in Psychiatry

This post is by Alex Miller Tate, who works in the philosophy of the cognitive sciences, and is currently completing a PhD at the University of Birmingham. Here, he summarises his paper "Contributory Injustice in Psychiatry" recently published in the Journal of Medical Ethics.




Significant service user involvement in the provision of and decisions surrounding psychiatric care (both for themselves as individuals and in the formation of policy and best practice) is, generally speaking, officially supported by members of the medical profession (see e.g. Newman et al 2015; Tait & Lester 2005). Service user advocacy organisations and others, however, note that the experience of service users (especially in primary care) is of having their beliefs about, feelings regarding, and perspectives on their conditions ignored or otherwise thoughtlessly invalidated. Some deleterious consequences of this have been noted before, including impoverished clinical knowledge of mental health conditions and worse health outcomes for service users (see e.g. Simpson & House 2002).

Not much attention has been paid to the structure and nature of these practices of exclusion themselves, however, until relatively recently. In the past couple of years there has been a small surge of work from both philosophers and practicing psychiatrists (sometimes in collaboration) identifying various kinds of epistemic injustice experienced by psychiatric service users and evaluating their significance (see e.g. Crichton, Carel & Kidd 2016; Kurs & Grinshpoon 2017; Johnstone & Boyle 2018). Epistemic injustices are those which harm people specifically in their capacity as knowers (Fricker 2007). My article introduces to this evolving field of discussion the notion of contributory injustice (due to Dotson 2012), a lesser-studied sub-type of epistemic injustice.

To understand the notion of contributory injustice, we must first appreciate the notions of a) an interpretive resource and b) an interpretive gap. An interpretive resource is something that we use to help us make sense of the world; our collections of concepts, our lexicons, and our methods of investigating the world to obtain knowledge (amongst other things) are all important interpretive resources (Pohlhaus 2012). An interpretive gap is present when we lack some resource/s that would help us to obtain a better understanding of some phenomenon or state of affairs.

There are at least two ways in which interpretive gaps may lead straightforwardly to epistemic injustice. The first is when a whole society’s pool of shared interpretive resources lacks those required to make sense of (some of) a marginalised group’s day-to-day experiences. In such a case, these experiences remain somehow ephemeral, or otherwise difficult or impossible to properly capture, from the perspective of both the marginalised and the dominant parties alike. This is Fricker’s (2007) notion of hermeneutical injustice. The second (and in my view more common) situation is when a dominant group has an interpretive gap regarding a marginalised group’s experiences, which the marginalised individuals have already identified and overcome within their own community. In such a case, insights that individuals themselves have into their own experiences are ignored and persistently misunderstood by ignorant others. This is Dotson’s (2012) notion of contributory injustice.

In my article, I argue that psychiatric service users, in particular those who hear voices, are regularly subject to contributory injustice when interacting with clinicians. I draw on the work of the Hearing Voices Network (an organisation dedicated to open and welcoming discussion of all perspectives on voice-hearing, centering on those who experience it) to argue that the harm done by this is both significant and readily avoidable. I argue that clinicians are obliged to take seriously the potential therapeutic benefit of service users’ individual, and sometimes unique, perspectives on and explanations of voice-hearing. I suggest that this is especially important where these perspectives and explanations are alien to them, overtly strange, or otherwise contrary to the dominant medical understanding of psychological distress. In so doing, clinicians will begin to treat their service users both more justly and as they actually are; indispensably knowledgeable and equal partners in the search for a helpful resolution of their difficulties, rather than an object of clinical investigation, diagnosis, and intervention.

Alex’s website, where interested parties can keep updated on his current research, and where he hopes to soon begin regularly blogging on a variety of philosophical topics, is here. Those who appreciate silly jokes and the occasional bit of philosophical insight (usually from somebody else) are welcome to follow Alex on Twitter.

Thursday, 28 February 2019

Remembering from the Outside: Personal Memory and the Perspectival Mind

Christopher McCarroll is a Postdoctoral Researcher at the Centre for Philosophical Psychology, University of Antwerp. He works on memory and mental imagery, with a particular interest in perspective in memory imagery. In this blog post Chris talks about his recently published book Remembering From the Outside: Personal Memory and the Perspectival Mind.




In his 1883 study into psychological phenomena, Francis Galton described varieties in visual mental imagery. Writing about the fact that some people "have the power of combining in a single perception more than can be seen at any one moment by the two eyes", Galton notes that "A fourth class of persons have the habit of recalling scenes, not from the point of view whence they were observed, but from a distance, and they visualise their own selves as actors on the mental stage" (1883/1907: 68-69). Such people remember events from-the-outside. In the language of modern memory research such images are known as ‘observer perspective memories’. Not everybody has such imagery, but are you one of Galton’s ‘fourth class of persons’? Do you recall events from-the-outside?

This perspectival feature of memory is a puzzling one, and it raises many questions. If the self is viewed from-the-outside, then who is the observer, and in what way is the self observed? Are such memories still first-personal? What is the content of such observer perspective memories? How can I see myself in the remembered scene from a point of view that I didn’t occupy at the time of the original event? Indeed, can such observer perspectives be genuine memories? In the book I provide answers to such questions about perspective in personal memory.

There is now a broad consensus that personal memory is (re)constructive, and some of the puzzles of remembering from-the-outside can be explained by appealing to this feature of memory. Indeed, it is often suggested that observer perspectives are the products of reconstruction in memory at retrieval. But this, I suggest, is only part of the story. To better understand observer perspectives in particular, and personal memory more generally, we need to look not only at the context of retrieval, but also at the context of encoding. 

Tuesday, 26 February 2019

Response to Ben Tappin and Stephen Gadsby

In this post, Daniel Williams, Postdoctoral Researcher in the Centre for Philosophical Psychology at the University of Antwerp, responds to last week's post from Ben Tappin and Stephen Gadsby about their recent paper "Biased belief in the Bayesian brain: A deeper look at the evidence". 


Ben Tappin and Stephen Gadsby have written an annoyingly good response to my paper, ‘Hierarchical Bayesian Models of Delusion’. Among other things, my paper claimed that there is little reason to think that belief formation in the neurotypical population is Bayesian. Tappin and Gadsby—along with Phil Corlett, and, in fact, just about everyone else I’ve spoken to about this—point out that my arguments for this claim were no good.

Specifically, I argued that phenomena such as confirmation bias, motivated reasoning and the so-called “backfire effect” are difficult to reconcile with Bayesian models of belief formation. Tappin and Gadsby point out that evidence for the backfire effect suggests that it is extremely rare, that confirmation bias as traditionally understood can be reconciled with Bayesian models, and that almost all purported evidence of motivated reasoning can be captured by Bayesian models under plausible assumptions.

To adjudicate this debate, one has to step back and ask: what kind of evidence *would* put pressure on Bayesian models of belief formation? Unfortunately this debate is often mired in appeals to concepts like logical consistency and inconsistency (i.e. falsification), which are largely irrelevant to science. (In fact, they are profoundly un-Bayesian.) As I mentioned in my paper, with suitable adjustments to model parameters, Bayesian models can be fitted to—that is, made logically consistent with—any data. 

The question is: which possible evidence should *weaken our confidence* in Bayesian models? Fortunately, Tappin and Gadsby don’t hold the view—surprisingly widespread in this debate—that there is nothing we could discover which should weaken our confidence in them. They concede, for example, that any genuine evidence for “motivated reasoning constitutes a clear challenge… to the assumption that human belief updating approximates Bayesian inference.”

Thursday, 21 February 2019

Belief and Belief Formation Workshop

The Centre for Philosophical Psychology at the University of Antwerp held a workshop on the 27th November 2018 on the topic of belief and belief formation. Here’s a brief summary of the excellent talks given at the workshop, kindly written by Dan Williams.




Neil Levy (Oxford/Macquarie) gave the first talk, entitled ‘Not so hypocritical after all: how we change our minds without noticing’. Levy focused on a phenomenon that many people assume to be a form of hypocrisy—namely, cases in which individuals come to change their beliefs about, say, politics when popular opinion (or the popular opinion within their relevant tribe or coalition) changes. (Levy gave the example of many ‘Never Trumpers’ who then apparently changed their opinion of Trump when he came to power).

Levy argued that at least some examples of this phenomenon are in fact not best understood as a form of hypocrisy; rather, they arise from people forming beliefs “rationally”. Specifically, he drew attention to two important features of human belief formation: first, our evolutionary dependence on cumulative cultural evolution, and the suite of psychological mechanisms that facilitate the cultural learning that underlies it; second, the way in which we offload representational states such as beliefs onto the surrounding environment. 

These two features of human psychology, Levy argued, can help to explain many apparent examples of hypocrisy: when an individual radically changes his or her opinion on, say, Trump, this need not be an example of motivated reasoning, “tribalism”, or hypocrisy; rather, it can simply be a result of these—usually adaptive and truth-tracking—features of human psychology.

Eric Mandelbaum (CUNY) gave the second talk on ‘The Fragmentation of Belief’. Mandelbaum sought to develop a form of “psychofunctionalism”, according to which beliefs are best understood as real entities within the mind that play the functional role of beliefs as described by our best contemporary cognitive science. 

Psychofunctionalism has traditionally been held back, Mandelbaum argued, by the lack of concrete proposals on what the relevant psychological laws or regularities that actually govern belief formation consist in. To address this, Mandelbaum sought to sketch a cognitive architecture, focusing specifically on the issue of how beliefs are stored. 

At the core of his proposal was the idea that belief storage is highly fragmented; rather than a unified web of belief, he argued that our best research in cognitive science supports a view of our cognitive architecture as consisting of many distinct, independently accessible data structures which Mandelbaum calls ‘fragments’. 

This architecture, Mandelbaum argued, generates many psychological phenomena that standard “web of belief”-based theories struggle to account for, such as inconsistent beliefs, redundant beliefs, and distinct bodies of information on the same subject.

Tuesday, 19 February 2019

Biased Belief in the Bayesian Brain

Today’s post comes from Ben Tappin, PhD candidate in the Morality and Beliefs Lab at Royal Holloway, University of London, and Stephen Gadsby, PhD Candidate in the Philosophy and Cognition Lab, Monash University, who discuss their paper recently published in Consciousness and Cognition, “Biased belief in the Bayesian brain: A deeper look at the evidence”.



Last year Dan Williams published a critique of recently popular hierarchical Bayesian models of delusion, which generated much debate on the pages of Imperfect Cognitions. In a recent article, we examined a particular aspect of Williams’ critique: specifically, his argument that one cannot explain delusional beliefs as departures from approximate Bayesian inference, because belief formation in the neurotypical (healthy) mind is not Bayesian.

We are sympathetic to this critique. However, in our article we argue that canonical evidence of the phenomena discussed by Williams—in particular, evidence of the backfire effect, confirmation bias and motivated reasoning—does not convincingly demonstrate that neurotypical belief formation is not Bayesian.

The backfire effect describes the phenomenon where people become more confident in a belief after receiving information that contradicts that belief. As pointed out by Williams, this phenomenon is problematic for Bayesian models of belief formation insofar as new information should cause Bayesians to go towards the information in their belief updating, never away from it. (As an aside, this expectation is incorrect, e.g., see here or here).
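For the simple two-hypothesis case, the expectation Williams appeals to is easy to see in code. Here is a minimal sketch of our own (the numbers are illustrative; and, as the aside notes, richer Bayesian models can behave differently):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior credence in a hypothesis after observing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# An agent strongly committed against a claim (credence 0.1) receives
# evidence that is twice as likely if the claim is true as if it is false.
prior = 0.1
posterior = bayes_update(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.4)

# In this simple setting the posterior always moves towards the evidence:
# a likelihood ratio above 1 raises credence, so "backfire" (credence falling
# after confirming evidence) cannot occur.
assert posterior > prior
print(round(posterior, 3))  # 0.182
```

On this elementary picture, observing backfire would count against Bayesian updating; the studies we review below suggest it is in fact rarely observed.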

We reviewed numerous recent studies where conditions for backfire were favourable (according to its theoretical basis), and found that observations of backfire were the rare exception—not the rule. Indeed, the results of these studies showed that by-and-large people updated their beliefs towards the new information, even if it was contrary to their prior beliefs and in a highly emotive domain.

Thursday, 14 February 2019

Self-control, Decision Theory, and Rationality

This post is written by José Luis Bermúdez, who is Professor of Philosophy and Samuel Rhea Gammon Professor of Liberal Arts at Texas A&M University. Prof. Bermúdez has published seven single-author books and six edited volumes. His research interests are at the intersection of philosophy, psychology and neuroscience, focusing particularly on self-consciousness and rationality. 

In this post, he presents his new edited collection "Self-Control, Decision Theory, and Rationality" published by Cambridge University Press. 



Is it rational to exercise self-control? Is it rational to get out of bed to go for a run, even when staying in bed seems preferable at the time? To resist the temptation to have another drink? Or to forego a second slice of cake?

From a commonsense perspective, self-control is a way of avoiding weakness of will, and succumbing to weakness of will seems to be a paradigm of irrationality – something that involves a distinctive type of inconsistency and practical failure. This reflects a focus on rationality in choices over time – on keeping one’s commitments and following through on one’s plans.

But things can look very different when one narrows down to specific, individual choices. Then rational self-control seems hard to accommodate. After all, to exercise self-control is to go against your strongest desires at the moment of choice – and why should you not take what seems to be the most attractive option? From the perspective of orthodox decision theory, rationality requires you to maximize expected utility and (at the moment of choice) being weak-willed is what maximizes expected utility.

Tuesday, 12 February 2019

OCD and Epistemic Anxiety

This post is authored by Juliette Vazard, a PhD candidate at the Center for Affective Sciences at the University of Geneva, and at the Institut Jean Nicod at the Ecole Normale Supérieure in Paris. In this post she discusses her paper “Epistemic Anxiety, Adaptive Cognition, and Obsessive-Compulsive Disorder” recently published in Discipline Filosofiche.


I am curious about what certain types of dysfunctional epistemic reasoning present in affective disorders might reveal about the role that emotions play in guiding our epistemic activities. Recently, my interest was drawn to the emotion of anxiety. Anxiety has often been understood as belonging to the domain of psychopathology, and the role of this emotion in the everyday lives of healthy individuals has long remained understudied. In this article I argue that anxiety plays an important role in guiding our everyday epistemic activities, and that when it is ill-calibrated, this is likely to result in maladaptive epistemic activities.

Anxiety is felt towards dangers or threats which are not immediately present, but could materialize in nearby possible worlds or in the future. Like other emotions, anxiety plays a motivational role in preparing us to act in response to the type of evaluation it makes. Because anxiety functions to make “harmful possibilities” salient, it prompts a readiness to face potential threats, as well as attempts to gain information about the threat (its chances of materializing, its magnitude, its specific nature, etc.).

I believe analyzing the nature and role of anxiety can enlighten us on the dysfunctional mechanisms at work in obsessive-compulsive disorder. OCD is a psychiatric disorder that most often involves obsessions, “which are intrusive, unwanted thoughts, ideas, images, or impulses”, and compulsions, which are “behavioural or mental rituals according to specified ‘rules’ or in response to obsessions” (Abramowitz, McKay, & Taylor 2008, p. 5). Most interestingly, persons with OCD experience the need to secure more evidence and demand more information before they can reach a decision and claim knowledge (that the stove is off, for instance) (Stern et al. 2013; Banca et al. 2015).

Thursday, 7 February 2019

Epistemic Innocence at ESPP

In September 2018, a team of Birmingham philosophers, comprising Kathy Puddifoot, Valeria Motta, Matilde Aliffi, Ema Sullivan-Bissett and myself, were in sunny Rijeka, Croatia, to talk a whole lot of Epistemic Innocence at the European Society for Philosophy and Psychology.

Epistemic innocence is the idea at the heart of our research at Project PERFECT. A cognition is epistemically innocent if it is irrational or inaccurate but operates in ways that could increase the chance of acquiring knowledge or understanding, where alternative, less costly cognitions that bring the same benefits are unavailable. Over the last few years, researchers on the project and beyond have investigated the implications of epistemic innocence in a range of domains (see a list of relevant work here). Our epistemic innocence symposium at ESPP2018 was a mark of the relative maturity of the concept, and an opportunity for us to start expanding its applications.
           
I went first, exploring the phenomenon of confabulation, where a person gives an explanation that is not grounded in evidence, without any intention to deceive. Confabulatory explanations sometimes arise where there is cognitive decline, such as in dementia or brain injury, and also in a number of psychiatric conditions. But a range of studies demonstrates that all of us, regardless of our cognitive function, regularly confabulate about all sorts of things, from consumer choices to moral convictions and political decisions.

Tuesday, 5 February 2019

The Epistemological Role of Recollective Memories

Today’s post is by Dorothea Debus, Senior Lecturer in the Department of Philosophy at the University of York.


Together with Kirk Michaelian and Denis Perrin I've recently edited a collection of newly commissioned papers in the philosophy of memory (New Directions in the Philosophy of Memory, Routledge 2018), and I've been invited to say something about my own contribution to that collection here.

My paper bears the title "Handle with Care: Activity, Passivity, and the Epistemological Role of Recollective Memories", and it is concerned with one particular type of memory, namely with memories that have experiential characteristics. The paper starts from the observation that such experiential or 'recollective' memories (here: 'R-memories') have characteristic features of activity as well as characteristic features of passivity.

A subject who experiences an R-memory is characteristically passive with respect to the occurrence of the R-memory itself, but subjects nevertheless also can be, and often are, actively involved with respect to their R-memories in various ways. At the same time, R-memories also play an important epistemological role in our everyday mental lives: When making judgements about the past, we often do rely on our R-memories of relevant past events, and it also seems that compared to other kinds of memories, we take R-memories especially seriously and give them special weight and particular attention when making judgements about the past.

What is more, there are important links between the epistemological role which R-memories play on the one hand, and our R-memories' characteristic features of passivity and activity on the other, and in the paper at hand I suggest that we can understand both these aspects of R-memory better by setting out to understand them together.

Thursday, 31 January 2019

Inner Speech: New Voices

Today's post is written by Peter Langland-Hassan and Agustin Vicente. Peter Langland-Hassan is Associate Professor of Philosophy at the University of Cincinnati.

Agustin Vicente is Ikerbasque Research Professor at the University of the Basque Country, Linguistics Department. In this post, they present their new edited volume "Inner Speech: New Voices".



Our new anthology, Inner Speech: New Voices (OUP, 2018), is the first in philosophy to focus on inner speech—a phenomenon known, colloquially, as “talking to yourself silently” or “the little voice in the head.” The book is interdisciplinary in spirit and practice, bringing together philosophers, psychologists, and neuroscientists to discuss the multiple controversies surrounding the nature and cognitive role of the inner voice.

Readers of this blog may be most familiar with theoretical work on inner speech as it occurs in the context of explaining Auditory Verbal Hallucinations (AVHs) in schizophrenia. Building on and amending early work by Christopher Frith (1992), a number of theorists have proposed that AVHs result from a deficit in the generation or monitoring of one’s own inner speech. Our book includes several chapters by well-known participants in those debates—including Hélène Loevenbruck and colleagues, Sam Wilkinson & Charles Fernyhough, Lauren Swiney, and Peter Langland-Hassan—that push the leading theories into new territory.

Stepping back, as philosophers of mind, it has always been surprising to us how little direct attention inner speech receives in philosophy and psychology. From a pre-theoretical, commonsense point of view, you might think that talking to yourself silently is one of the most important—and certainly most common—forms of thought we enjoy. And yet, few contemporary philosophers or psychologists assign to inner speech an indispensable cognitive role. This is in itself grounds for puzzlement: if we could get along more or less the same without talking to ourselves, why do we spend so much time in silent soliloquy?

Tuesday, 29 January 2019

Are Psychopaths Legally Insane?

This post is by Katrina Sifferd, Professor of Philosophy at Elmhurst College, on her recent paper ‘Are Psychopaths Legally Insane?’, co-authored with Anneli Jefferson, Leverhulme Early Career Fellow in the Philosophy department at the University of Birmingham.



Exploring the nature of psychopathy has become an interdisciplinary project: psychologists and neuroscientists are working to understand whether psychopathy constitutes a mental disorder or illness, and if yes, of what sort; and moral philosophers and legal scholars are using theories of psychopathy to understand the mental capacities necessary for culpable action and whether psychopaths are morally and legally responsible.

In our recent paper, Anneli Jefferson and I argue that a diagnosis of psychopathy is generally irrelevant to a legal insanity plea. It isn’t clear that psychopathy constitutes a true mental disorder; but even if it is a disorder, tests for legal insanity require that specific mental deficiencies related to a disorder serve to excuse a defendant. Specifically, a successful insanity plea requires that deficits in moral understanding or control are present at the time the defendant commits the criminal act.

The two aspects of psychopathy most likely to impact responsibility are emotional deficits (or a lack of empathy), and problems with impulse control. Some have argued that flattened affect may deny psychopaths an understanding of the moral quality of their actions; and lack of impulse control can obviously result in harmful and illegal behavior. However, recent studies indicate that persons with higher scores on the PCL-R (the diagnostic tool typically used for psychopathy) are heterogeneous both with regard to empathy/affect and impulse control.

Antisocial behavior figures prominently in the PCL-R’s diagnostic criteria for psychopathy. This heavy reliance on antisocial behavior, we think, complicates our understanding of psychopathy, because the relation between social deviance and the mental deficits traditionally associated with psychopathy, including problems with affect and impulsive behavior, has not been established. Although it seems clear that persons diagnosed as psychopaths using the PCL-R have a history of antisocial behavior, it is not clear that the diagnosis pinpoints mental deficits causally related to this behavior.