Tuesday 31 January 2017

On the Special Insult of Refusing Testimony


In this post Allan Hazlett (pictured above), Professor of Philosophy at the University of New Mexico, summarises his paper "On the Special Insult of Refusing Testimony", which is forthcoming in a special issue of Philosophical Explorations on false but useful beliefs. The special issue is guest edited by Lisa Bortolotti and Ema Sullivan-Bissett and is inspired by project PERFECT's interests in belief.

My paper is inspired by two remarks made by J.L. Austin and G.E.M. Anscombe. In “Other Minds” (1946), Austin writes that “[i]f I have said I know or I promise, you insult me in a special way by refusing to accept it,” and, in “What Is It to Believe Someone?” (1979), Anscombe writes that “[i]t is an insult … not to be believed.” The goal of my paper is to give an account of why you can insult someone by refusing her testimony.

I take my paradigm case of the special insult of refusing testimony from David Foster Wallace’s (2004) story entitled “Oblivion.” In the story, the narrator Randall Napier describes a “strange and absurdly frustrating marital conflict between Hope [i.e. his wife] and myself over the issue of my so-called ‘snoring’” as arising from Hope’s repeated insistence that he snores. Randall, however, refuses to accept Hope’s testimony on this point, because, as he insists, “I, in reality, am not yet truly even asleep at the times my wife cries out suddenly now about my ‘snoring’ and disturbing her.” Given some other details of the story, I maintain that, in refusing Hope’s testimony, Randall insults her.

How is this possible? I characterize insults, in general, as expressions or manifestations of offensive attitudes: if you insult someone, you express or manifest an attitude that is offensive to her. In the case of refusing testimony, I argue, the relevant attitude manifested is doubt about the speaker’s credibility. But this attitude is offensive to the speaker, when it is, only in virtue of the fact that the speaker presupposed that she was credible by offering her testimony in the first place. Refusing someone’s testimony is thus an instance of rejecting a person’s invitation to engage in a collective activity on the basis of doubt about her competence to engage in that activity.

Thursday 26 January 2017

Tense Bees and Shell-Shocked Crabs

Today's post is by Michael Tye on his book Tense Bees and Shell-Shocked Crabs: Are Animals Conscious?.

I’m a philosopher at the University of Texas at Austin. I encountered philosophy at Oxford and I’ve taught at Temple University, London University, and the University of St Andrews as well as at Texas. I’ve published widely on consciousness and I am associated with a view that has come to be known as representationalism.  The book described below, published by OUP in November 2016, remains neutral on the question as to the right view on the nature of consciousness.





Do birds have feelings? What about fish – can fish feel pain? Do insects have experiences? Can a honeybee feel anxious? If trees aren't conscious but fish are, what's the objective difference that makes a difference? How do we decide which living creatures have experiences and which are zombies? Can there be a conscious robot? This book advances philosophically rigorous, empirically informed answers to these questions. To do this, an epistemological framework suitable for tackling such issues is developed and then, in light of recent empirical research, applied broadly. In particular, it is argued that it is rational to prefer the hypothesis that consciousness extends a considerable way down the phylogenetic scale, farther than many would expect. This result has both theoretical and practical implications. The chapters are organized as follows.




Chapter 1 discusses experience and its limits. How can one know whether an animal is having an experience? It is suggested that what is needed to answer this question is not a principle spelling out what experience is in objective terms (either via an a priori definition or an a posteriori theory) but an evidential principle on the basis of which one can justifiably attribute consciousness to animals.


Chapter 2 addresses the question of the relationship between experience and consciousness. Chapter 3 takes up the radically conservative view, held by some major historical figures, that only humans can have experiences. This view is a mistake. The mistake has its origins in religious conviction, mind-body dualism, and an alleged connection between thought and language. This conservatism survives in certain contemporary views that hold that experience is thought-like or conceptual.

Tuesday 24 January 2017

Epistemically Useful False Beliefs



Duncan Pritchard (pictured above) is Professor of Philosophy at the University of Edinburgh and Director of the Eidyn research centre. In this post, he summarises his paper on epistemically useful false beliefs, which is forthcoming in a special issue of Philosophical Explorations on false but useful beliefs. The special issue is guest edited by Lisa Bortolotti and Ema Sullivan-Bissett and is inspired by project PERFECT's interests in belief.

It seems relatively uncontroversial that false beliefs can often be useful. For example, if one’s life depends on being able to jump that ravine, then it may be practically useful to have a false belief about how far one can jump. In particular, that one overestimates one’s jumping ability may well give one the confidence needed to wholeheartedly attempt the jump. Having an accurate conception of one’s abilities in this regard, in contrast, might lead one to falter, thereby consigning oneself to certain death (rather than possible escape). Moreover, notice that this utility needn’t be a one-off, in that one could imagine cases where it is systematically advantageous to have certain false beliefs (perhaps one occupies an environment where overestimating one’s abilities is regularly conducive to one’s survival).

The question that concerns me, however, is whether there is a philosophically significant class of false beliefs that are specifically epistemically useful. The reason why this is an interesting question is that it is part of the very nature of the epistemic that it is concerned with the promotion of truth and the avoidance of error. With that in mind, how could a false belief be epistemically useful?

We need to refine our question a little here, which is why I am focussing on whether there is a philosophically significant class of false beliefs that are epistemically useful. The reason for this is that there are clearly some uncontroversial cases where false beliefs are epistemically useful. In making a calculation, for example, having a false belief might help one to gain a correct result because it cancels out a previous error. What would be philosophically interesting, however, and potentially in tension with our traditional view of the nature of the epistemic, is whether this epistemic utility could be sustained over the long term. In particular, what we are interested in is whether false belief can ever be systematically epistemically useful.

In the paper I approach this question in a piecemeal fashion by considering a selection of cases which might look like plausible examples of false beliefs that are systematically epistemically useful. The first concerns the kinds of strictly false claims that are sometimes employed in scientific reasoning, such as appeals to idealisations (like the ideal gas law). I argue that when we look at these cases more closely, however, it isn’t credible that having a false belief specifically (as opposed to, say, accepting a false proposition) is systematically generating epistemic value.


Thursday 19 January 2017

Metaepistemology and Relativism



Today's post is by J. Adam Carter, lecturer in Philosophy, University of Glasgow. In this post, he introduces his new book Metaepistemology and Relativism.

The question of whether knowledge and other epistemic standings like justification are (in some interesting way) relative is one that gets strikingly different kinds of answers, depending on who you ask. In humanities departments outside philosophy, the idea of ‘absolute’ or ‘objective’ knowledge is widely taken to be, as Richard Rorty (e.g., 1980) had thought, a naïve fiction—one that a suitable appreciation of cultural diversity and historical and other contingencies should lead us to disavow. A similar kind of disdain for talk of knowledge as objective has been voiced—albeit for different reasons—by philosophers working in the sociology of scientific knowledge (e.g., Barry Barnes, David Bloor, Steven Shapin).


And yet, within contemporary mainstream epistemology—roughly, the branch of philosophy concerned with the nature and scope of human knowledge—the prevailing consensus is a strikingly different one. The term ‘epistemic relativism’ and views such as Rorty’s that have been associated with this label have been, if not dismissed explicitly as fundamentally unworkable (e.g., Boghossian 2006, Ch. 6), simply brushed aside by contemporary epistemologists, who proceed in their first-order projects as if arguments for epistemic relativism can be simply bracketed, and as if the first-order epistemological questions they grapple with have objective answers.

In Metaepistemology and Relativism (Palgrave Macmillan, 2016) I set out to question whether the kind of anti-relativistic background that underlies typical projects in mainstream epistemology can on closer inspection be vindicated.

In the first half of the book—after some initial ground clearing and a critical engagement with global relativism—I evaluate three traditional strategies for motivating epistemic relativism. These are, (i) arguments that appeal in some way to the Pyrrhonian problematic; (ii) arguments that appeal to apparently irreconcilable disagreements (e.g., Galileo versus Bellarmine); and (iii) arguments that appeal to the alleged incommensurability of epistemic systems or frameworks.

I argue over the course of several chapters that a common weakness of these more traditional argument strategies for epistemic relativism is that they fail to decisively motivate relativism over scepticism. Interestingly, though, this style of objection cannot be effectively redeployed against a more contemporary, linguistically motivated form of epistemic relativism, defended most influentially by John MacFarlane (e.g., 2014).

Tuesday 17 January 2017

Biological Function and Epistemic Normativity



Ema Sullivan-Bissett (pictured above) is Lecturer in Philosophy at the University of Birmingham, having previously worked as a Postdoctoral Research Fellow on project PERFECT. In this post she summarises her paper ‘Biological Function and Epistemic Normativity’, forthcoming in a special issue of Philosophical Explorations on False but Useful Beliefs. Alongside Lisa Bortolotti, Ema guest edited this special issue which is inspired by project PERFECT’s interests in belief.

In my paper I give a biological account of epistemic normativity. I am interested in explaining two features:


(EN1) Beliefs have truth as their standard of correctness.

(EN2) There are sui generis categorical epistemic norms.

Thursday 12 January 2017

Chandaria Lectures: Andy Clark

In this post, Sophie Stammers reports from the Chandaria Lectures, hosted by the School of Advanced Study at the University of London. Professor Andy Clark, of the University of Edinburgh, gave the annual lecture, where he introduced the notion of ‘predictive processing’. Over the course of the three lectures, he put forward the case for understanding many of the core information processing strategies that underlie perception, thought and action as integrated through the predictive processing framework.


On a model of perception popular with Cartesians, and undoubtedly dominant in areas of the canon that I was acquainted with as an undergraduate, perception is something of a passive business. Perceivers employ malleable receptor systems that (aim to) faithfully imprint the world as it is, delivering a raw stream of information that is made sense of downstream in later processing. Clark dubs this the “cognitive couch potato view”. Despite its past popularity, this view seems incompatible with evidence from multiple research streams in cognitive science which indicate that perceivers are far from passive, and bring many of their own expectations to the table. Predictive processing (PP) aims to provide a story which both accounts for and unifies these findings, whilst also doing justice to the human experience in the midst of it all.

PP systems don’t just take in sensory information from the world; they constantly try to actively predict the present sensory signals using probabilistic models. Incoming sensory signals are met by a flow of top-down prediction, and when this matches the sensory barrage, the system has unearthed the most likely set of causes that would give rise to the particular experience. “Prediction errors” (information about mismatches between current prediction and sensory information) indicate a gap in the predictive model, and that a new hypothesis should be selected to accommodate the current sensory signal.
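The core dynamic of predicting and then correcting by the error signal can be caricatured in a few lines of code. This is only a toy, single-level sketch of my own (the function name, the scalar signal, and the simple proportional update rule are all my assumptions, not Clark's hierarchical probabilistic model), but it shows how iteratively reducing prediction error drives a model toward the incoming signal.

```python
# Toy sketch of prediction-error minimisation (illustrative only: a
# single scalar "model", not the hierarchical probabilistic machinery
# of the predictive processing framework).
def predictive_update(prediction, signal, learning_rate=0.1, steps=50):
    """Repeatedly revise a prediction to reduce its mismatch with the signal."""
    for _ in range(steps):
        error = signal - prediction          # the "prediction error"
        prediction += learning_rate * error  # top-down revision of the model
    return prediction

# Starting from a poor guess, the prediction converges on the sensory
# signal as the error is progressively minimised.
print(predictive_update(0.0, 1.0))
```

In the full framework the "hypothesis selection" Clark describes would correspond to switching between generative models rather than nudging a single number, but the error-driven logic is the same.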

Maybe rich, world-revealing perceptions (as of tables, chairs, conversations, lovers, etc.) only arise from the otherwise indiscriminate sensory barrage when the incoming sensory signal can be matched with top-down predictions.

Tuesday 10 January 2017

Rational Hope


In this post Miriam McCormick (pictured above), Associate Professor of Philosophy and Philosophy, Politics, Economics and Law (PPEL) at the University of Richmond, summarises her paper on "Rational Hope", which is forthcoming in a special issue of Philosophical Explorations on false but useful beliefs. The special issue is guest edited by Lisa Bortolotti and Ema Sullivan-Bissett and is inspired by project PERFECT's interests in belief.

History and literature are filled with examples of people in horrific, desperate situations where having hope seemed essential for their survival. And yet, holding on to “false hope” can be devastating and can also condone inaction, where hoping for a change replaces working for that change. It also can seem that finding hope in a hopeless situation must be irrational. Is the choice between rational despair and irrational hope? I don’t think so; there are times when hope is rational but it is not always so. My main aim in this paper is to specify conditions that distinguish rational, or justified, hope from irrational, or unjustified hope. I begin by giving a brief characterization of hope and then turn to offering some criteria of rational hope.

Thursday 5 January 2017

Bedlam: the Asylum and Beyond

From 15 September 2016 to 15 January 2017 the Wellcome Collection is hosting an exhibition entitled: Bedlam, the Asylum and Beyond, exploring the recent history of psychiatry via the evolution of one asylum, the Bethlem Royal Hospital in London, often just called 'Bedlam' (see image of the Hospital below, Wellcome Images).



The exhibition is very rich and what is most striking about it is that it offers the perspective of those facing mental health issues on the asylum, and on psychiatry in general, through beautiful artworks and photography, and in the stunning audio companion. One of the premises of the exhibition is that asylums are regarded today as 'hell on earth', places where cruel practices were tolerated and the main aim was not to treat people so that they could recover and go back to their lives, but to control them and limit their freedom (see 'Scene at Bedlam' below, Wellcome Images).



I was really impressed by a series of vignettes by Ugo Guarino, an Italian artist and illustrator who participated in the anti-psychiatry movement. The series is called "Zitti e buoni", an expression meaning "Be quiet and behave", and criticises psychiatric interventions that limit people's freedom. In one vignette a doctor opens a man's skull and probes his brain with tools, with the caption: "In nome della scienza", meaning "In the name of science". Guarino befriended Franco Basaglia, the influential Italian psychiatrist who proposed the closure of all psychiatric hospitals in the seventies, and his work is of great historical importance in the evolution of contemporary thought about psychiatric care.

One provocative idea is that there may be something about the asylum that is not so negative and can be explored and developed at this critical time for psychiatry, when people still struggle to get better despite the plurality of treatment options available to them. So the final message of the exhibition is a message of hope. Modern-day Bedlam has space dedicated to art and craft, and is a much more welcoming place than it once was. 

We can take the thought of the good asylum even further, with Madlove, an ideal asylum designed on the basis of the suggestions of experts by experience. The project, run by Hannah Hull and the Vacuum Cleaner, ends with a colourful model of an asylum that is accepting of individual differences and a safe space for experimenting with images and sounds. Or rather, the project is open-ended as visitors are invited to submit their own suggestions about what the ideal asylum should be like.

Another message of hope comes from repeated references to Geel (see the recent Aeon essay on it), the market town in Belgium where there has been what we could call a 'community approach to mental health' for centuries. People with mental health issues are welcomed into the inhabitants' homes as 'boarders' and participate fully in the life and work of the family. This response to mental health issues is a source of inspiration for many.

You still have time to see the exhibition, free at the Wellcome Collection. Don't miss it!



Tuesday 3 January 2017

Responding to Stereotyping


In this post Kathy Puddifoot (pictured above), Research Fellow at the University of Birmingham, summarises her paper on "Responding to Stereotyping", which is forthcoming in a special issue of Philosophical Explorations on false but useful beliefs. The special issue is guest edited by Lisa Bortolotti and Ema Sullivan-Bissett and is inspired by project PERFECT's interests in belief.

Women occupy only thirteen percent of jobs in scientific fields in the United Kingdom. Suppose that as a result of being exposed to accurate depictions of this situation, say, in the news media, you form a stereotype associating science with men. This association influences your automatic responses to individuals. For example, if you hear about a great feat of engineering you automatically assume that the person who achieved it is a man. Is it a good thing that your judgements are automatically influenced by this scientist stereotype?

A natural thought is that if you want to be egalitarian, doing the ethical thing, then you should not engage in stereotyping. You should assume, for example, that any achievement in engineering is equally likely to be achieved by a man or a woman. In contrast, if your aim is to make correct judgements, it is not a bad thing to be influenced by the stereotype because it reflects reality. My paper challenges this thought.

Monday 2 January 2017

The Optimist Within

This post is by Leslie van der Leer (Regent's University London) who recently wrote a paper entitled, "The Optimist Within", together with Ryan McKay (Royal Holloway). The paper is to appear in a special issue of Consciousness and Cognition on unrealistic optimism, guest edited by Anneli Jefferson, Lisa Bortolotti, and Bojana Kuzmanovic.




“Let me think it over”, you say as the travel agent advises you to buy insurance. But is this wise? Can we get a more accurate representation of our risks by ‘thinking them over’?

Studies by Vul and Pashler (2008) and Herzog and Hertwig (2009) suggest the answer is “Yes”, provided that we combine our considered risk estimate with our initial estimate. These authors found that the well-known “wisdom-of-the-crowd” effect also applies within a single mind. The wisdom-of-the-crowd effect appears when the average of several judges’ estimates (e.g., estimates of the weight of an ox; Galton, 1907) is more accurate than each individual estimate, on average. In the “crowd-within” effect, the average of two estimates provided by a single person has, on average, a smaller error than either individual estimate. The implication is that people provide these estimates based on random samples from an internal distribution, where each sample resembles an independent judge’s estimate. On this basis, ‘thinking things over’ (and averaging estimates) would make us more accurate.
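The statistical logic behind the crowd-within effect can be seen in a minimal simulation. This is my own illustration, not the authors' code, and it assumes the simplest version of the sampling story: each estimate is an independent, unbiased noisy draw from an internal distribution centred on the true value.

```python
import random

# Minimal simulation of the crowd-within effect. Assumption (mine, for
# illustration): each estimate is an independent Gaussian sample centred
# on the truth, i.e. sampling is random rather than selective.
random.seed(0)
TRUTH = 50.0
trials = 10_000
err_single = err_average = 0.0
for _ in range(trials):
    first = random.gauss(TRUTH, 10)   # initial estimate
    second = random.gauss(TRUTH, 10)  # considered, independent estimate
    err_single += abs(first - TRUTH)
    err_average += abs((first + second) / 2 - TRUTH)

# Averaging two independent unbiased estimates shrinks the expected
# error (by roughly a factor of 1/sqrt(2) for Gaussian noise).
print("mean error, single estimate: ", err_single / trials)
print("mean error, averaged estimates:", err_average / trials)
```

Note that the averaging benefit in this sketch depends on the two draws being independent and unbiased; it is exactly this random-sampling assumption that the selective-sampling finding described below puts under pressure for self-relevant negative events.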


Whereas existing studies on the crowd within involve repeated estimates of neutral items (e.g. “The area of the USA is what percentage of the area of the Pacific Ocean?”), we (Van der Leer & McKay, in press) also investigated what happened on the second estimate for self-relevant negative events (e.g., “What is the chance that you will die before 90?”): would random errors in direction cancel out (producing a crowd-within effect) or would participants sample selectively, producing systematically more optimistic estimates the second time around? After providing an initial estimate, participants in our study were asked to assume this estimate was wrong and to provide a second, different estimate. Estimates were incentivized for accuracy, to counter a motivation to costlessly signal a lower risk (e.g., to the experimenter).


We found that first and second estimates for neutral questions were not systematically different. Yet, second estimates for undesirable questions were more optimistic than first estimates. This suggests that participants were sampling selectively – rather than randomly – from their internal probability distribution when providing estimates for undesirable events. Our results indicate that people “deceive” themselves by judiciously selecting rosy estimates of their future prospects. Despite this self-deceptive selective sampling, we did find that the average of the two estimates had a smaller error than either estimate alone (i.e., crowd-within effect). This suggests you might arrive at a more accurate estimate of your risk if you think things over and take an average, before getting back to your travel agent.