Thursday 28 December 2017

Head-to-Head public engagement workshop 2017

I am Tom Davies, a PhD student at the University of Birmingham, and my own research broadly concerns the metaphysics of mental causation. I assisted in the Head-to-Head public-engagement event this year.




Head-to-Head aimed to bring together philosophers and members of the public, for a series of accessible talks across three sessions. Primarily organised by Ema Sullivan-Bissett (University of Birmingham) and Alisa Mandrigin (University of Stirling), Head-to-Head was made possible by a British Academy grant, through the Early Career Mind Network.

The sessions followed a standard format: three talks in each, structured around a specific theme, before opening up to questions from the audience. The theme of the first session was “Making Sense of Our Senses”. Louise Richardson (York) kicked things off with an introduction to the contemporary scientific work on the senses, and the thought that this work might suggest there are more than the five “traditional” senses. Louise offered some reasons to be sceptical of this suggestion. This was followed by a talk from Alisa Mandrigin on interaction between senses, and a consequent challenge to the standard view of sensory perception. The first session was capped off by Laura Gow (Warwick), who introduced the audience to the debate around colour objectivity, before offering some novel thoughts on that debate. These talks were interesting and engaging for the audience of laypeople and philosophers alike.


Tuesday 26 December 2017

When is a Cognitive System Immune to Delusions?

Today's post is by Chenwei Nie, a PhD student in the Department of Philosophy at the University of Warwick. His research focuses on philosophical issues related to beliefs and delusions. You can read his previous work here.




Experiences and cognitive processes are two crucial elements in the formation and maintenance of delusions. Maher’s (1974) one-factor theory argues that delusions are reasonable responses to anomalous experiences. Motivated by the evidence that some people with anomalous experiences do not have delusions, the two-factor theory (e.g., Davies, Coltheart, Langdon, & Breen, 2001) argues that, besides anomalous experiences, there is also an impairment in cognitive processes.

In my understanding, delusions arise not because of anomalous experiences or impaired cognitive processes alone, but because of a mismatch between them, such that the cognitive processes are not able to account for the anomalous experiences in a normal way. Since a mismatch may also happen when either the experience is too anomalous for normal cognitive processes or the cognitive processes are too impaired for a normal experience, this invites us to wonder whether delusions may arise:

(1) when the anomalous experiences are so abnormal that even normal cognitive processes are not able to account for them;

(2) when the cognitive processes are so impaired that they cannot even account for normal experiences.

Here I shall argue for the idea that delusions may arise in scenario (1). Although this idea looks similar to Maher’s one-factor theory, in the sense that both agree that delusional people have normal cognitive abilities, there are two important differences. First, this idea relies on the claim that our cognitive abilities are limited. Second, the anomalous experiences here are something that lies beyond normal cognitive abilities.

Thursday 21 December 2017

The European Network for the Philosophy of the Social Sciences (ENPOSS) 2017

This post is by Tomasz KwarciƄski, reporting from the sixth conference of the European Network for the Philosophy of the Social Sciences (ENPOSS), which took place from 20th to 22nd September at Cracow University of Economics (CUE).





The event was co-organized by the Department of Philosophy at CUE and the Copernicus Center for Interdisciplinary Studies, with the support of the Polish Philosophy of Economics Network. The three days of the conference were divided among keynote lectures, parallel sessions, and book symposia (a novelty at ENPOSS conferences). The conference was preceded by the Polish Philosophy of Economics Network’s symposium. Two of the keynote talks were recorded and can be watched by clicking on the links in this post.

The ENPOSS conference was officially inaugurated by Daniel Hausman (University of Wisconsin-Madison) with his speech “Social Scientific Naturalism Revisited”, in which he posed the fundamental question of whether the social sciences differ from the natural sciences in a way that the natural sciences do not differ from one another. He focused on economics and pointed out nine features which distinguish economics from the natural sciences: its specific subject matter (human beings), the non-universal character of economic generalizations, reflexivity (e.g. the objects economics studies can be influenced by what economists say about them), the fact that the explanatory concepts of economics (preferences and beliefs) are subjective, intentional, and not directly observable, the importance of rationality in economics, and the possibility that economic findings are influenced by social norms and ideology.





The first parallel session on the first day addressed well-being, and the problems of its measurement and of interpersonal comparisons. The second session included issues such as the production of scientific knowledge, the relationship between social science and the philosophy of action, and values in science. The first day of the conference was concluded by the book symposium devoted to Daniel Little’s (University of Michigan-Dearborn) “New Directions in the Philosophy of Social Science”. The author and the panelists Gianluca Manzo (GEMASS & University of Paris-Sorbonne) and Federica Russo (University of Amsterdam) led a discussion of the topics that Little addresses in the book.


Tuesday 19 December 2017

A Plea for Minimally Biased Naturalistic Philosophy

In this post, Andrea Polonioli (pictured below), MBA candidate at Strathclyde Business School, summarises his paper titled “A Plea for Minimally Biased Naturalistic Philosophy”,  forthcoming in Synthese.


My paper argues that there would be benefits for naturalistic philosophers if they expanded their methodological toolkit. The tools discussed here are the systematic methodologies for literature search and review that are widely employed in the natural, life and health sciences. 

In more detail, the paper presents and defends the following claims. First, naturalistic philosophers do not philosophise in a vacuum and, in fact, rely on literature search and review in a number of ways and for several purposes. A hot topic in metaphilosophy concerns how best to describe the methods used by philosophers and their practices. Many of the recent discussions on this topic have focused on whether, to what extent, and how analytic philosophy rests on the use of intuitions. Still, we should not underestimate the importance of literature search and review for the philosophical profession, at least in many areas of philosophical investigation. More precisely, if we asked what naturalistic philosophers actually do when they carry out philosophical research, a plausible answer could not help but mention their engagement with literature search and review as an important aspect of it.

Second, biases and cognitive limitations are likely to affect literature search and review in many critical ways. Over the past decades, psychologists have described numerous ways in which judgment formation and information search can be biased, and there is no reason to doubt that literature search and review can also be biased in important ways, even in the field of philosophy. For instance, Baumeister and Leary (1997, 319) wrote that:

Although literature reviews are less subject than empirical investigations to capitalizing on chance, they are probably more susceptible to the danger of confirmation bias. Many good literature reviews involve seeing a theoretical pattern or principle in multiple spheres of behavior and evidence, and putting together such a paper undoubtedly involves an aggressive search for evidence that fits the hypothesized pattern. 

Thursday 14 December 2017

CfP: Philosophical Perspectives on Confabulation




Announcement: there will be a TOPOI Special Issue on Philosophical Perspectives on Confabulation. Here is the Call For Papers.


Introduction


Numerous psychological studies establish that, even when we are unaware of information that is relevant to the occurrence of an event, we may nonetheless offer a sincere, often inaccurate, explanation for that event. This phenomenon is named confabulation, or broad/everyday confabulation to distinguish it from those cases of confabulation that are due to impaired memory or that emerge in clinical contexts. We confabulate about what one might think are trivial matters, such as consumer choices, but we are also prone to confabulating in situations which, arguably, implicate our identity, such as when we explain our political beliefs and moral convictions.

Confabulation raises a number of important philosophical questions. For instance, it is an open question how exactly we should characterize the phenomenon. Does a single characterization unify all instances of confabulation? Or do we need a family of related cognitions and behaviors to best account for the phenomenon? Whether clinical and non-clinical cases constitute distinct phenomena, or whether these instances are related, and how, is also up for discussion.

Early philosophical work on confabulation identifies it as a threat to first person authority and characterizes it in terms of its epistemic faults. If these accounts are right, then there are wide-ranging consequences for theorizing in the philosophy of mind and epistemology: we are routinely mistaken about the nature and origin of our sensations, preferences and judgements, and develop theories about our motivations that can be wildly inaccurate.

Given the ubiquity of the phenomenon, clarity on the nature and implications of confabulation is important for the project of understanding the mind. But further to this, a better understanding of the phenomenon will facilitate interventions, both with psychiatric patients and in everyday cases, in order to improve cognition.

Contributions from philosophers working in the philosophy of mind, philosophy of psychology, and epistemology are most welcome.

Tuesday 12 December 2017

Stranger than Fiction: Costs and Benefits of Confabulation

In this post I present the main ideas in my recent paper on confabulation, "Stranger than Fiction", which appeared in Review of Philosophy and Psychology in October, open access.




Confabulation has a bad press in philosophy, often identified as the main obstacle to attaining self-knowledge and described as an obvious instance of epistemic irrationality. In earlier work I thought about the current definitions of confabulation, which focus on the surface features of the phenomenon and can be divided into two broad categories: those that define confabulations as false beliefs, and those that define confabulations as ill-grounded beliefs.

In this paper though, after a brief introduction, I leave aside how confabulation should be defined, and focus instead on its costs and benefits. In particular, I ask what costs and benefits it has for the acquisition, retention, and use of information that is relevant to us. Are we epistemically worse or better off when we confabulate?

Does confabulation really compromise self-knowledge? Does it really count as an instance of epistemic irrationality? I argue that confabulatory explanations of one's attitudes and choices do not threaten self-knowledge as correct mental-state self-attribution (that is, we know what our attitudes and choices are); but they are an instance of epistemic irrationality in the sense that we "tell more than we can know", as Nisbett and Wilson (1977) famously put it. For instance, we put forward an explanation of our choice when we lack sufficient evidence relevant to the causal process behind that particular choice. As a result, our explanation is ill-grounded and, on the basis of it, we may adopt further ill-grounded beliefs.

But that is only one side of the story. Confabulation also has a wealth of benefits for our epistemic agency that are often neglected if we just focus on truth and justification as the primary epistemic goods. Primarily, the benefits of confabulation are psychological. First, offering explanations about our attitudes and choices makes us feel more competent and enables us to build links and connections between the different things we may value and choose. A sense of competence and coherence will enhance our perceived agency, that is, the sense that we do not do things randomly or under the influence of uncontrollable environmental cues, but act in accordance with our values, striving to attain the goals we set for ourselves.

Another psychological benefit of confabulation is that by offering an answer to a request for an explanation we exchange information with other people, and socialisation might be enhanced as a result. Socialisation contributes to both wellbeing and cognitive performance, but also allows us to receive feedback on our explanations and, in some circumstances, build some critical distance from them. Our explanations are likely to be false, as they are not based on the relevant information, but by being "out there", as an object of conversation and discussion, they may become a source of reflection and bring knowledge eventually, either about our attitudes and choices or about other things.

This does not mean that confabulation is all things considered good for us or should be encouraged. Rather it means that, when we take steps to reduce confabulation, and tell stories that are better grounded, we should also think about how our new and improved stories support our sense of agency, so that we don't throw the baby out with the bathwater.


Thursday 7 December 2017

Interview with John Sutton on Distributed Cognition

In this post Alex Miller Tate (AMT) interviews John Sutton (JS), pictured below, about his views on a number of research topics, many of which were explored at the Distributed Cognitive Ecologies of Collaborative Embodied Skill workshop.




AMT: Hello John, and thank you very much for agreeing to be interviewed for the Imperfect Cognitions blog! Let’s start with quite a general question: could you please clarify for some of our readers the different research areas that came together at your workshop?

JS: Sure! The workshop investigated the intersection of three broad research topics that have interested me and others for some time. The first is the notion of Collaborative or Joint Action, the second is the Psychology and Philosophy of Skill, and the third is the Embodied and Distributed Cognition paradigm.

Lab studies of Joint Action have tended to focus on various kinds of synchrony amongst actors – such as situations where two people who have just met up will walk off ‘in step’ with each other, having previously been walking out of synch with each other, or where two or more people’s eye-gaze falls upon the same object relevant to the achievement of some collaborative task. On this topic, we have evidence both that joint motives enhance certain kinds of bodily synchrony, and that bodily synchrony promotes the achievement of jointly held goals.

But many cases of intuitively joint action – or, perhaps better, collaborative action – necessarily involve non-synchronous behaviour in the achievement of some jointly held goal. Examples include members of sports teams and bands. Individual actions, in these cases, ideally complement each other, so that the group may achieve some collective end, but they may not be at all alike – a bass guitar player’s movements will be nothing like a trombonist’s, even (perhaps especially) when they are collaborating to produce, say, some improvised jazz.

In such cases, it seems like individuals are exercising skills collaboratively, but non-synchronously. But the skill literature is generally quite individualistic. While we have much discussion of the relative merits of automatic vs cognitively controlled accounts of skill and expert action, we have little insight into how collaboration in the realm of skill works, or how the need to collaborate may affect the deployment or acquisition of individual skill, or even how skills may be rightly attributed to joint agents (plausible cases include phenomena such as swarm intelligence).

Finally, we were interested in investigating at this workshop how features of our natural and social environments may act as cues or supports to our collaborative activities, or to our acquisition or deployment of skills in collaborative contexts. This is where the notions of embodied and distributed cognition enter into the picture.

AMT: I imagine many of our readers might be thinking about the connection between the substantial literature on Collective Intentionality, and some of the topics you have discussed above. Do you think that they are interestingly connected?

JS: I suspect that paradigm cases of Collective Intentionality (CI), discussed primarily in the world of analytic philosophy, and paradigm cases of Joint Action (JA), may be aspects of the same phenomenon. They certainly share many features, and many cases of the latter intuitively depend on basic cases of the former; some sort of shared goal, for instance. But they do have some obvious differences too. Not only do some paradigm cases of CI tend toward something more like synchrony (in belief, attention, and so on) than non-synchronous complementarity of behaviour and intention; discussions of CI also tend to concern phenomena that are introspectively, or at least interpersonally, accessible. That is, they refer to kinds of joint behaviour that are relatively articulable. The Joint Action literature often concerns itself with relatively small, complementary, and non-articulable adjustments to behaviour in response to that of another (though some cases probably allow for significantly more articulability than theorists like Hubert Dreyfus might want to allow).

Resolving the question of the kind and limit of connection between these topics is likely to require talking about the interaction between automaticity and explicit cognitive control in skill more generally, as this clearly has ramifications for degrees of articulability in different forms of active coordination. We don’t want to end up in a position where we ascribe too much explicit control to humans, where computationally expensive, representation-heavy, and biologically questionable models dominate our theorising. 

But neither do we wish to describe agents as if they were more plant than human. We should look for a continuous and complex ‘meshing’ of cognition and automaticity in Joint Action. In this respect, we can learn a lot from Distributed Cognition. This paradigm shows us how features of our social and natural environments can allow for the human brain to adopt relatively computationally frugal solutions to problems (including, it seems likely, problems of coordination in Joint Action). Moreover, we will learn a lot about these issues by examining cases where expertise and fluidity in Collaborative Action break down.





AMT: How does your understanding of Distributed Cognition relate to notions of The Extended Mind?

JS: I am not primarily concerned with defending strong metaphysical claims about The Extended Mind (though this is not to disagree with them, necessarily). There are a couple of reasons for this. The first is that much discussion of The Extended Mind, at least in its canonical formulation by Andy Clark and David Chalmers, tends to prioritise similarity between external and internal mental resources or functions, in order to argue that many such things are of a (cognitive) kind, metaphysically speaking. 

I am more interested in cases where external resources complement internal neural resources, and where those external resources have developed, or have been deliberately shaped, together with us as agents so that we become better adapted for everyday cognition. Many such cognitive scaffolds bear no resemblance to internal cognitive resources, either structurally or functionally, but nevertheless play an important role in ordinary mental life. A focus on functional similarity risks ignoring certain indispensable contributions to everyday cognition (specifically, those that depend on, or are transformed by, the environment performing functions that the brain does not or cannot perform). In this respect, Kim Sterelny’s account of the scaffolded mind is usefully compatible with my interpretation of Clark.

The second reason is that while Extended Mind theorists tend to be more interested in cognitive ontology (where is the mind, what can count as a part of it, and so on), I am more interested in studying how human beings actually think, solve day-to-day problems, and control action. In doing this, I make the bet that many different kinds of environmental resources will be indispensable to our explanations. I further suspect, in Quinean fashion, that modelling such distributed cognition well might give us our ontology ‘for free’.

AMT: One final question. Do you see any interesting points of convergence, or opportunities for collaboration, between those of us working on these non-traditional, explicitly social, projects in Cognitive Science, and those working on critical projects in Social and Political Philosophy?

JS: I think the answer to that is ‘yes’. Cognitive Science that is sensitive to the social context of cognition cannot be blind to issues of social and political injustice for long, as it would mean avoiding important context that may be affecting everyday human cognition. Moreover, it would be irresponsible not to take stock of the moral, social, and political consequences of de-centering individual brains in Cognitive Science. 

In many ways, these sorts of projects involve the re-negotiation of scientific and philosophical boundaries, and these boundaries can often be as much ideological as they are academic and theoretical. James Williams has a critique of Andy Clark’s Extended Mind model that takes it to task on exactly this kind of theoretical blind spot: a failure to recognise the potential harm of uninterrogated social and political assumptions built into the theory. He favourably contrasts Sterelny’s work in this respect.

My view is not only that the thesis of Extended Mind or Distributed Cognition, when rightly interpreted, does have the resources to address these issues effectively. I think, further, that its focus on the incompleteness of our individual cognitive capacities, and on the ways we are intrinsically interdependent with artefacts and with other people in complex cognitive ecologies, actually offers more promising insights into our psychological vulnerability and resilience than any other approach in the philosophy of mind and cognition.

More generally, I think adopting a critical edge will help the development of both a sound and conscionable Cognitive Science, and progressive work in Social and Political Philosophy clearly has something to add here. Going further, there are similar moral as well as academic benefits to be had from interdisciplinarity, especially between Cognitive Science and the Humanities/Social Sciences. They offer us both a critical edge and very rich case studies. Better yet, these disciplines are fascinated by Distributed Cognition, and we have much to offer them theoretically, as well as vice versa.

Tuesday 5 December 2017

Is Autism a Disease?

This post is by Christopher Mole, Chair of the programme in Cognitive Systems at the University of British Columbia. He is the author of Attention is Cognitive Unison (OUP, 2010), and The Unexplained Intellect (Routledge, 2016). This post outlines the argument of his recent article, “Autism and ‘disease’: The semantics of an ill-posed question” (Philosophical Psychology, 8(3): 557-571).



Discussions of autism are often euphemistic: we speak of ‘service users’ rather than patients, and of ‘atypicality’ rather than illness. By avoiding the rhetoric of disease we avoid the implication that the autistic point of view is a defective one, which would be gone from a world in which everything was operating correctly.

Those who do use the vocabulary of disease might reject such motivations, while congratulating themselves on their straight-talking, no-nonsense approach. This would, I think, be a mistake. According to one tradition, the mistake would be that of applying a ‘medical model’. Autism, on this view, is something other than a disease.

This too is an unappealing position. Autism has several effects, some disrupting the gastrointestinal system, others disrupting the processes of immune response and inflammation. It seems arbitrary to deny that those consequences that affect psychological functioning might also be understood medically. And to deny this would leave us without a full account of the autistic person’s entitlement to help.

Autistic people can seem inconsiderate. They are, as a result, prone to suffer from loneliness, unless allowances are made. Such suffering can be profound. It is appropriate that these allowances be made (and appropriate that healthcare budgets provide funding for them). Autism therefore differs from such non-medical conditions as the condition of being an arsehole. That condition is also prone to produce the suffering of loneliness, but — not being a disease — there is no reason why healthcare resources should be directed to its mitigation.

On these grounds (and others) we find ourselves wanting to avoid saying that autism is a disease, and also wanting to avoid saying that it is not one. We might try to have it both ways, by saying that the question is vague, or that the answer varies from case to case. I claim that we should instead reject both answers.

Michael Dummett’s theory of pejoratives opens up the logical space for this. Pejoratives (such as racist terms for ethnic groups) should be rejected whether their use is affirmative or negative. Such terms should even be rejected in contexts that are non-assertoric, as in the asking of questions.

Similarly, I claim, we should reject the vocabulary of disease in connection with autism, not because we should deny that autism is a disease, but because we should refuse even to ask the question.

It is a strength of Dummett’s theory that it applies to vocabulary whether or not that vocabulary is insulting, and so explains why vocabulary that applauds piety, machismo, or class loyalty, is no better than vocabulary that deplores racial diversity, effeminacy, or free-thinking. The problem with pejorative vocabulary is not the insult. The problem is that such vocabulary allows normative consequences to be inferred from the wrong descriptive basis.

The vocabulary of disease enables us to infer certain normative consequences on the basis of there being a condition that impairs human flourishing: from the presence of such a condition it allows us to infer that cure-seeking would be appropriate (and perhaps obligatory for those with a duty of care); and it allows us to infer that shortcomings attributable to this condition are mitigated.

Thursday 30 November 2017

Understanding Ignorance

In this post, Professor and Chair of Philosophy at Gettysburg College, Daniel DeNicola, introduces his just-released book, Understanding Ignorance: The Surprising Impact of What We Do Not Know (MIT, August 2017). He writes on a range of ethical and epistemic issues, usually related to education. His new book grew from an earlier work, Learning to Flourish: A Philosophical Exploration of Liberal Education (Continuum/Bloomsbury, 2012).



Ignorance, it seems, is trending. Political ignorance has become so severe that the democratic ideal of an informed citizenry seems quaint. Willful ignorance is the social diagnosis of the moment: critics find it to be implicated in prejudice, privilege, ideology, and information cocoons. Ignorance is used both as accusation and excuse. In the broadest sense, it is an ineluctable feature of the human condition.

And yet, philosophers have ignored ignorance. While occupied with the sources and structure of knowledge, epistemologists for centuries have dismissed ignorance as simply the negation of the proposition, “S knows that p.”

Within the last two decades, however, scholarship on various aspects of ignorance has popped up in several disciplines. My book, Understanding Ignorance, draws on these multi-disciplinary works and presents what is likely the first comprehensive, philosophical treatment of ignorance—comprehensive, in that it addresses conceptual, epistemological, ethical, and social dimensions.

My explication is organized by four spatial metaphors: ignorance as a place or state, as boundary, as limit, and as horizon. Among the topics discussed are the relation of ignorance and innocence, the technique of mapping our ignorance, and our intellectual tools for ignorance management. I also offer a critique of “the virtues of ignorance” as proposed by various writers.

I conclude that ignorance has significant philosophical import and a structure perhaps more complex than that of knowledge. (After all, if genuine knowledge requires, say, four conditions, then the failure to meet any one or any combination describes a different form of ignorance.) I argue that we have many ways of constructing our own (and others’) ignorance, both deliberately and inadvertently—a few of which are morally permissible, even obligatory.
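To make the counting behind that parenthetical explicit (this is my gloss on the arithmetic, not a calculation DeNicola himself carries out): if genuine knowledge requires four individually necessary conditions $C_1, C_2, C_3, C_4$, then a form of ignorance corresponds to any non-empty set of conditions that fails to hold, giving

$$2^4 - 1 = 15$$

structurally distinct ways of falling short of knowledge, before we even consider the different ways in which each individual condition can fail.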





Although Understanding Ignorance is intended to be accessible to a non-specialist readership, it builds a critique targeted against traditional analytic epistemology: it has been focused on propositional knowledge and the “context of justification,” ignoring other forms of knowing and the “context of discovery”; it has concentrated on the individual knower in solo acts of cognition, ignoring the dynamics of epistemic communities; and it has been uninterested in many issues about values that arise from the acquisition, content, purpose, and context of knowing and not-knowing.


In reaction, I embrace virtue epistemology, along with insights from social and feminist epistemology—with a richer treatment of ignorance. Thus, I advocate a re-centering of the field on the interaction of knowledge and ignorance.

Tuesday 28 November 2017

Structure-to-Function Mappings in the Cognitive Sciences




Muhammad Ali Khalidi is Professor of Philosophy and Chair of the Department of Philosophy at York University in Toronto. He specializes in general issues in the philosophy of science (especially, natural kinds and reductionism) and philosophy of cognitive science (especially, innateness, concepts, and domain specificity). His book, Natural Categories and Human Kinds, was published by Cambridge University Press in 2013, and he has recently been working on cognitive and social ontology.

If a sudden interest in taxonomy is indicative of a crisis in a scientific field, then the cognitive sciences may currently be in a state of crisis. Psychologists, neuroscientists, and researchers in related disciplines have recently devoted increasing attention to the ways in which their respective disciplines classify and categorize their objects of study. Many of these researchers consider themselves, rightly in my opinion, engaged in the effort to uncover our “cognitive ontology”.

Ever since the nineteenth century, naturalist philosophers like Whewell, Mill, and Venn have regarded scientific taxonomy as a guide to the “real kinds” that exist in nature. The categories that play an important role in our theorizing, by explaining and predicting phenomena, are the ones that will tend to uncover ontological divisions in nature. Contemporary naturalists, like Richard Boyd, agree with this inference from taxonomy to ontology, holding that “successful induction and explanation always require that we accommodate our categories to the causal structure of the world” (1991, 139).


Thursday 23 November 2017

The Meaning of Belief

This post is by Tim Crane.

I am Professor of Philosophy at the Central European University (CEU) in Budapest. I was Knightbridge Professor of Philosophy at the University of Cambridge and taught at UCL for almost twenty years. I founded the Institute of Philosophy in the University of London, and I am the philosophy editor of the TLS.

I have written five books on the nature of the mind, which is my principal area of interest in philosophy. But I also have a long-standing interest in the nature of religion and religious belief, and The Meaning of Belief is my first serious attempt to write on this subject.



The Meaning of Belief attempts to give a description of the phenomenon of religion from an atheist’s point of view — that is, on the assumption that there is no god, supernatural or transcendent reality or being. The book’s aim is not to argue for this atheism, but to give a description of religious belief which makes sense to believers themselves.

In this respect the book differs from many recent atheist books on religion, which aim to show that most of what counts as religious belief is both false and irrational. These books — by the self-styled ‘New Atheists’ — tend to emphasise the cosmological element of religious belief, treating belief as a kind of primitive science or as a rival to science. The Meaning of Belief challenges this view of religion — cosmological claims, claims about the universe as a whole, are important to most religions, but they are not scientific claims.

Tuesday 21 November 2017

Does Hallucinating Involve Perceiving?


My name is Rami el Ali and I am an assistant professor at the Lebanese American University. I work in philosophy of mind, but also have research interests in Phenomenology and the Philosophy of Technology. Currently my focus is on the nature of misperception, and in particular hallucinations.




In my paper 'Does Hallucinating Involve Perceiving?', I argue for the tenability of a common-factor relationalist (alternatively, naive realist) view of perceptual experience. I do this by arguing that a view on which hallucinating involves perceiving can accommodate three central observations thought to recommend the widely accepted nonperceptual view of hallucinations, on which hallucinations do not involve perception.

Philosophers usually agree, even when they do not accept the view, that relationalism provides the simplest characterization of perception. Correspondingly, the simplest view of experience merely extends the account of perception to illusions and hallucinations. The resultant view is that all experience involves sensory awareness of the mind-independent surroundings, where those surroundings appear some way to the subject.

This view of experience is quickly dismissed because hallucinations are thought to be nonperceptual. In its place, we have views like disjunctive relationalism and common-factor representationalism that seek to accommodate nonperceptual hallucinations. But whether we should resort to these views at least partly depends on whether we must accept the nonperceptual view of hallucinations. Upon closer inspection, three observations that seem to favor the view fail to recommend it over a perceptual alternative.

The first two observations focus on discrepancies between hallucinatory appearances and the surroundings the hallucinator is related to. The gist of my response to these is that no discrepancy between what the subject is aware of and how things appear to her establishes that the subject is not aware of her surroundings. More specifically, the first observation focuses on the 'inappropriateness' of the objects of awareness to the hallucinatory appearance. This depends on specifying an acceptable standard of appropriateness. 

Thursday 16 November 2017

Only Imagine. Fiction, Interpretation and Imagination

Kathleen Stock is a Philosopher at the University of Sussex, working on questions about imagination and fiction, including: What is the imagination? What is the relation between imagining and believing? What is fiction? Can we learn from fiction? Are there limits to what we can imagine? She has published widely on related topics, and her book Only Imagine: Fiction, Interpretation and Imagination is now out with Oxford University Press. She blogs about fiction and imagination at thinkingaboutfiction.me.





Philosophers and literary theorists argue about three things: what fiction is, how fiction should be interpreted, and what imagination is. In Only Imagine, I suggest that all three questions can be illuminated simultaneously.  I aim to build a theory of fiction that also tells us about the imagination, and vice versa.

My focus is on texts. First, I defend a theory of fictional interpretation (or ‘fictional truth’ as it’s sometimes called). When we read a novel or story, we understand certain things as part of the plot: ‘truths’ about characters, places, and events (though of course these are usually not actually true, but made up). A lot of the time, these ‘truths’ are made explicit – directly referred to by the words used by the author. But equally, in many cases, plot elements are only implied, not referred to explicitly. By what principle does or should the reader work out what such elements are, for a given story? Whether explicit or implied, I argue that fictional truths are to be discerned by working out what the author of the story intended the reader to imagine.

Tuesday 14 November 2017

Philosophy of Psychedelic Ego Dissolution: Unbinding the Self

This post is by Chris Letheby.




In recent decades there has been a growing interdisciplinary attempt to understand self-awareness by integrating empirical results from neuroscience and psychiatry with philosophical theorizing. This is exemplified by the enterprise known as ‘philosophical psychopathology’, in which observations about unusual cognitive conditions are used to infer conclusions about the functioning of the healthy mind. But this line of research has been somewhat limited by the fact that pathological alterations to self-awareness are unpredictable and can only be studied retrospectively—until now.

The recent resurgence of scientific interest in ‘classic’, serotonergic psychedelic drugs such as LSD and psilocybin has changed all this. Using more rigorous methods than some of their forebears, psychiatrists have shown that psychedelics can, after all, be given safely in clinical contexts, and may even cause lasting psychological benefits. Small studies have shown symptom reductions in anxiety, depression, and addiction, and positive personality change in healthy subjects, lasting many months, after just one or two supervised psychedelic sessions.

What’s most intriguing is that the mechanism of action appears to involve a dramatically altered state of consciousness known as ‘ego dissolution’, in which the ordinary sense of self is profoundly altered or even absent. In many studies, ratings of ‘mystical experience’—of which ego dissolution is a core component—strongly predict clinical outcomes; so understanding ego dissolution is crucial for understanding the therapeutic potential of psychedelics.




Moreover, with the advent of modern neuroimaging technologies, there is a chance for psychedelics finally to fulfil their promise to be for psychiatry “what the microscope is for biology… or the telescope is for astronomy” (Grof 1980). The renaissance of psychedelic research allows neuroscientists, for the first time, to watch the sense of self disintegrate and reintegrate, safely, reliably, and repeatedly, in the neuroimaging scanner.

In a paper recently published in Neuroscience of Consciousness, Philip Gerrans and I have proposed a novel account of self-awareness based on findings from psychedelic science. Research to date has found that ego dissolution is associated with global increases in connectivity between normally segregated brain regions, resulting from a breakdown of high-level cortical networks implicated in the sense of self. However, results have not been entirely consistent; in some studies, breakdowns are most pronounced in the famous Default Mode Network (DMN), centred on the medial prefrontal and posterior cingulate cortices, whereas in others they are found in the Salience Network (SLN), centred on the anterior cingulate and anterior insular cortices.

Philip and I argue that this pattern of results can be explained by an account combining insights from two different theories: the influential predictive processing theory of brain function, and the self-binding theory of Sui and Humphreys (2015). Predictive processing holds that the brain is a prediction engine, constantly building models to anticipate its future inputs and reduce error. Meanwhile, self-binding theory says that a key function of self-representation is to integrate information from disparate sources into coherent representations. This claim is based on a body of experimental work showing that self-related information is integrated more efficiently than non-self-related information.
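A minimal toy sketch may help readers picture the predictive processing claim (this is my own illustration, not the formal model Gerrans and I develop in the paper): let $x_t$ be the sensory input at time $t$ and $\hat{x}_t$ the brain's current prediction of it. The prediction error and a simple error-reducing update are

$$\varepsilon_t = x_t - \hat{x}_t, \qquad \hat{x}_{t+1} = \hat{x}_t + \alpha\,\varepsilon_t, \qquad 0 < \alpha \le 1,$$

so that over time the prediction tracks the input and the error shrinks; richer versions of the theory apply this scheme hierarchically, with self-models among the highest-level predictors.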

Thursday 9 November 2017

The Copenhagen 2017 School in Phenomenology and Philosophy of Mind




The Copenhagen Summer School in Phenomenology and Philosophy of Mind is an annual event organized by the Center for Subjectivity Research. It aims to provide essential insights into central themes within the philosophy of mind, viewed from a phenomenological perspective. The general topics covered this year were intentionality, experience, reflection, perception, attention, self-awareness, rationality, normativity and methodology.

Over a period of 5 days, the schedule included keynote lectures, PhD presentations, discussion groups and seminars. The late afternoons and evenings were dedicated to different social events (such as visits to the city, a harbour tour) which allowed for opportunities to exchange ideas amongst researchers. In this post, I give a detailed summary of the main points made by the keynote speakers.




On the first day SĂžren Overgaard talked about Embodiment and Social Perception. The question he set out to answer was whether Social Perception Theory depends on a particular view of embodiment. Social Perception Theory (SPT) claims that it is possible to perceive that others are happy, angry, in pain, desire another piece of pie, or intend to attack.

The idea of embodiment that Overgaard argued for is one on which at least some mental states extend all the way out to perceptible surface behaviour. In his view, a joyous smile on someone’s face is part of the mental state of joy in so far as it may carry information about the emotion. According to an intuitively plausible view that he labels the ‘Dependency Thesis’, SPT depends in specific ways on Embodiment. But he considers that, in the context of the mindreading debate, the Dependency Thesis is false.







On the second day Hanne Jacobs gave a talk on Attention, Reason and Subjectivity. According to Jacobs, Husserl’s account is worth considering when trying to understand what we do when we make up our own (embodied, personal, and socially embedded) minds. Based on Husserl’s phenomenology, she proposed that attention is a mode of consciousness in which we exercise reason.

She argued against contemporary authors who defend the idea that Husserl’s phenomenology proposes a non-discursive form of rationality present in pre-predicative perception. She proposed instead that Husserl ties the activity of reason to the capacity for reflection in at least one significant sense: when we are attentive to something and take it to be a certain way and not another, we are also pre-reflectively aware of the reasons there are for and against our taking it to be that way.

This sort of pre-reflective awareness gives way to reflective deliberation. And it is for this reason that we need not just a theory of self-knowledge but a theory of the subject, or subjectivity, in order to understand the nature and scope of the exercise of rationality. Jacobs argues that Husserl presents us both with a theory of self-knowledge and with an account of the subject that exercises rationality.


Tuesday 7 November 2017

Understanding Autism

This post is by Dan Weiskopf. He is an Associate Professor of Philosophy at Georgia State University, and his research deals with classificatory practices in scientific taxonomy and everyday cognition.



Autism is among the most mystifying of psychiatric disorders. For patients and their families, doctors, and caregivers, it presents an intractable and often painful clinical reality. For researchers, it presents a profound theoretical challenge. While it has a handful of fairly well agreed-upon characteristics (the so-called “core triad”), it is also linked with an enormous range of inconsistent and heterogeneous symptoms. These include behavioral, cognitive, neurobiological, and genetic abnormalities, as well as somatic medical conditions.

Given this messiness, it is hard to say what autism itself even is, let alone design effective interventions and treatments for it. There has been a call by some—psychologist Lynn Waterhouse most prominently—to eliminate the disorder from our nosology, on the grounds that it is too disunified to count as a single condition.

Against this eliminativist position, I argue that the prospects for understanding autism are brighter if we adjust our expectations of what psychiatric disorders look like. In “An Ideal Disorder? Autism as a Psychiatric Kind”, I propose that the complexities of autism can best be explained using a network model.

Think of a disorder as initially “anchored” by a set of focal exemplars. These exemplars represent cleaned-up and idealized sets of clinical cases in which the disorder appears. They constitute the nodes of the network that represents the disorder. In the case of autism, there might be different idealized cases standing for “high” and “low” functioning individuals, but many more distinctions might be necessary depending on how the disorder presents itself in different settings and patient populations. For example, we now recognize the important fact that the profile of autism appears to be quite different in men than in women.

For each exemplar there is a characteristic set of cognitive, somatic, neural, and genetic markers. These correspond to places where things have gone wrong, at many levels, to produce that particular clinical phenotype. The existence of these underlying explanatory clusters is what warrants treating focal exemplars as real sites for deeper investigation and treatment.

This captures autism’s heterogeneity. Still, why think there is one disorder here rather than many? The answer is that we can trace out commonalities in the patterns of disruption underlying these exemplars. Exemplars are chained together into a network by having these properties in common. One patient may share a certain rigid behavioral repertoire and GI ailments with a second one, and that patient may share a form of abnormal language development with a third. The third, in turn, might have a specific neuroanatomical abnormality that is shared with the first, but not the second.


Thursday 2 November 2017

Call for Papers: Confabulation and Epistemic Innocence

Elisabetta Lalumera is organising a Confabulation and Epistemic Innocence workshop at the University of Milano-Bicocca (image below), to be held in Milan (Italy) on May 28, 2018.



Below you can find the call for papers for the event.


Summary of topic

When people are unaware of information that accounts for some phenomenon, this does not necessarily prevent them from offering a sincere, but often inaccurate, explanation. Indeed, whilst confabulation has been shown to occur alongside psychiatric diagnoses featuring serious memory impairments, and in people undergoing symptoms of mental distress, it also occurs regularly in people with no such diagnoses or symptoms.

Some cognitions which fail to accurately represent reality may nonetheless have redeeming features that promote good functioning in a variety of domains. Inaccurate cognitions may misrepresent the world, but can also bring psychological and practical benefits. More recently, philosophers have pointed out that epistemically costly cognitions can also sometimes have positive epistemic features.

When these epistemic benefits are significant, and could not be attained in other ways, then the cognition can be considered “epistemically innocent”. The notion of epistemic innocence has already been discussed in the case of some inaccurate cognitions, such as delusions, but whether or not confabulation counts as epistemically innocent is a relatively underexplored issue.

Sample questions

  • What are the epistemic costs of confabulation and are there any epistemic benefits? 
  • How should epistemic costs and potential benefits of confabulation be adjudicated? 
  • Are the potential epistemic benefits of confabulation related to, or independent of, any psychological benefits? 
  • Is providing an inaccurate answer or an inaccurate explanation better (and in what sense) than providing no answer or no explanation at all? 
  • Are the costs and benefits of clinical confabulation comparable to those of non-clinical confabulation?


Instructions, review process, and timeline

Please submit a 500-word paper by 28th February 2018. The short paper is supposed to sketch the main argument and include some reference to its conclusions and implications. Contributors will be informed by 30th March 2018 whether their short papers have been selected for presentation at the workshop.

For the authors whose short papers are selected for presentation, there may also be an opportunity to submit a full paper (max. 8000 words) by 30th June 2018 for inclusion in a special issue of a journal on the philosophy of confabulation.

Please submit your 500-word papers to Valeria Motta as an email attachment. The title should be CONFABULATION 2018 [SURNAME OF AUTHOR]. Identifying information such as the full name of the authors, their email addresses, and their affiliations should not appear in the attachment, but only in the body of the email. The attachment should contain a blind version of the short paper. Authors’ names, affiliation if applicable, and contact details will be accessed only by Valeria Motta who will assign reviewers, but will not be involved in reviewing submissions. Each paper will be independently reviewed by two experts. Contributions from members of groups underrepresented in philosophy are especially welcome.

Please note that reasonable travel and subsistence expenses of the selected workshop contributors will be covered thanks to the support of the University of Milano-Bicocca and the sponsorship of project PERFECT. Also note that presentation of the selected papers at the workshop does not guarantee that the full papers will be accepted for publication in a special issue dedicated to the issue of confabulation, as the full papers will be subject to further, independent review and may be rejected at that stage.

You may contact Elisabetta Lalumera for general inquiries about the workshop or the CFP.

Tuesday 31 October 2017

Quotidian Confabulations

In this post, Chris Weigel discusses her paper “Quotidian Confabulations: An Ethical Quandary Concerning Flashbulb Memories,” published in Theoretical and Applied Ethics in 2014. Chris is a professor of philosophy at Utah Valley University. She works mainly on experimental philosophy of free will and on cognitive biases.



How did you find out about the planes crashing on September 11, 2001? What do you remember about the first time you met your spouse? Wait, don’t answer those questions! Your memories about those events are flashbulb memories—memories of surprising, monumental, and emotionally-laden events—and my paper invites us to rethink asking people for their memories about these events, such as the Challenger explosion, assassinations of important public figures, and terrorist attacks.

My conclusion isn’t that we should never ask people about their flashbulb memories, but rather that sometimes asking people about their flashbulb memories is problematic. It’s problematic because flashbulb memories often involve confabulations (i.e., believed, obvious falsehoods), and under certain circumstances, we should abstain from provoking a confabulation.

My conclusion is counterintuitive. It’s counterintuitive to say that in certain cases people should not ask others about flashbulb events. To get to that conclusion, I begin by looking at Anton’s syndrome and Capgras syndrome, two syndromes that involve confabulations. People with Anton’s syndrome think they can see even though they are blind. If you ask someone with Anton’s syndrome what you look like, they will likely answer you even if they have never met you before.

People with Capgras syndrome believe that their loved ones are impostors. A person with Capgras syndrome might tell you that their father isn’t really their father despite all evidence to the contrary. If you ask that person how the so-called impostor came to have the right wallet, appearance, demeanor, and memories, the person with Capgras syndrome might begin by telling you a story about how the wallet must have been stolen. People with these syndromes confabulate. That is, they assert falsehoods that they confidently believe, even though others can see that the falsehoods are obviously false and baseless. Also, these particular confabulations can be reliably provoked in certain circumstances: ask an Anton’s patient what they see or ask a Capgras patient about the specific loved ones they believe are impostors, and you’ll most likely get a confabulation.

The next step in the argument is to see that, absent competing obligations, it is wrong to provoke a confabulation. Competing obligations include such things as medical research, neuroscientific research, caring for the patient, and so on. A doctor who is trying to assess a blind person with Anton’s syndrome might ask that person what they see in the course of a medical evaluation. The importance of accurate medical evaluations entails that provoking a confabulation in such circumstances is not problematic. On the other end of the spectrum, suppose someone sold tickets at a fair to gawkers who wanted to see the confabulating blind person who thinks they can see. This ticket seller’s motives are negative and cruel, and it is fairly easy to see that provoking confabulations in such a case is problematic.

In between are cases where someone provokes a confabulation with no competing obligations (either positive as in the case of the doctor or negative as in the case of the ticket seller). For example, someone might provoke a confabulation out of idle curiosity, to fill a silence with sound, or for no reason at all. In these cases, since there are no overriding obligations like research or diagnosis, provoking a person with Anton’s syndrome or Capgras syndrome to confabulate is problematic.

Thursday 26 October 2017

True Enough

Catherine Z. Elgin is Professor of the Philosophy of Education at Harvard Graduate School of Education. She is the author of Considered Judgment, Between the Absolute and the Arbitrary, With Reference to Reference, and (with Nelson Goodman) Reconceptions in Philosophy and Other Arts and Sciences. In this post, she talks about her book True Enough.




Epistemology valorizes truth.  There may be practical or prudential reasons to accept a contention that is known to be false, but it is widely assumed that there can never be epistemically good reasons to do so.  Nor can there be epistemically good reasons to accept modes of justification that are not truth-conducive.  Although this seems plausible, it has a fatal defect.  It cannot accommodate the cognitive contributions of science.  For science unabashedly uses models, idealizations, and thought experiments that are known not to be true.  Nor do practicing scientists think that such devices will ultimately be eliminated.  They expect current models to be supplanted by better models, but not by the unvarnished truth.  Modeling, idealizing, and thought experimenting are considered valuable tools, not unfortunate concessions to human frailty.

Such devices are, I contend, epistemically felicitous falsehoods. They are not mere heuristics. They are central components of the understanding that science supplies. If it is to accommodate science, then epistemology must relax its allegiance to truth. True Enough develops a holistic epistemology that does so. It acknowledges that tenable theories must be tethered to the phenomena they concern, but denies that truth is the sole acceptable tether. Felicitous falsehoods figure in understanding by exemplifying features they share with their targets. They highlight those features and display their significance. A model like the ideal gas, although strictly true of nothing, highlights important features of actual gases, while sidelining confounding features that for the purposes of a given inquiry make no difference.
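For readers who want the textbook formula behind this example (standard physics, not something derived in the book): the ideal gas model says that

$$PV = nRT,$$

where $P$ is pressure, $V$ volume, $n$ the amount of gas in moles, $R$ the gas constant, and $T$ absolute temperature. The model treats molecules as point particles exerting no forces on one another, which is strictly true of no actual gas, yet the equation captures, to a good approximation, how the pressure, volume, and temperature of real gases co-vary.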

On this picture, the center of epistemological gravity shifts from knowledge to understanding.  In science and other systematic inquiries, relatively comprehensive bodies of epistemic commitments -- some true, some felicitously false, some not even truth-apt -- stand or fall together.  Rather than accepting the kinetic theory of gases because she already accepts each of its component commitments, an agent accepts those commitments because they are part of a theory that is, on the whole, acceptable.