Thursday 28 February 2019

Remembering from the Outside: Personal Memory and the Perspectival Mind

Christopher McCarroll is a Postdoctoral Researcher at the Centre for Philosophical Psychology, University of Antwerp. He works on memory and mental imagery, with a particular interest in perspective in memory imagery. In this blog post Chris talks about his recently published book Remembering From the Outside: Personal Memory and the Perspectival Mind.




In his 1883 study of psychological phenomena, Francis Galton described varieties of visual mental imagery. Writing about the fact that some people "have the power of combining in a single perception more than can be seen at any one moment by the two eyes", Galton notes that "A fourth class of persons have the habit of recalling scenes, not from the point of view whence they were observed, but from a distance, and they visualise their own selves as actors on the mental stage" (1883/1907: 68-69). Such people remember events from-the-outside. In the language of modern memory research such images are known as ‘observer perspective memories’. Not everybody has such imagery, but are you one of Galton’s ‘fourth class of persons’? Do you recall events from-the-outside?

This perspectival feature of memory is a puzzling one, and it raises many questions. If the self is viewed from-the-outside, then who is the observer, and in what way is the self observed? Are such memories still first-personal? What is the content of such observer perspective memories? How can I see myself in the remembered scene from a point of view that I didn’t occupy at the time of the original event? Indeed, can such observer perspectives be genuine memories? In the book I provide answers to such questions about perspective in personal memory.

There is now a broad consensus that personal memory is (re)constructive, and some of the puzzles of remembering from-the-outside can be explained by appealing to this feature of memory. Indeed, it is often suggested that observer perspectives are the products of reconstruction in memory at retrieval. But this, I suggest, is only part of the story. To better understand observer perspectives in particular, and personal memory more generally, we need to look not only at the context of retrieval, but also at the context of encoding. 

Tuesday 26 February 2019

Response to Ben Tappin and Stephen Gadsby

In this post, Daniel Williams, Postdoctoral Researcher in the Centre for Philosophical Psychology at the University of Antwerp, responds to last week's post from Ben Tappin and Stephen Gadsby about their recent paper "Biased belief in the Bayesian brain: A deeper look at the evidence". 


Ben Tappin and Stephen Gadsby have written an annoyingly good response to my paper, ‘Hierarchical Bayesian Models of Delusion’. Among other things, my paper claimed that there is little reason to think that belief formation in the neurotypical population is Bayesian. Tappin and Gadsby—along with Phil Corlett, and, in fact, just about everyone else I’ve spoken to about this—point out that my arguments for this claim were no good.

Specifically, I argued that phenomena such as confirmation bias, motivated reasoning and the so-called “backfire effect” are difficult to reconcile with Bayesian models of belief formation. Tappin and Gadsby point out that evidence for the backfire effect suggests that it is extremely rare, that confirmation bias as traditionally understood can be reconciled with Bayesian models, and that almost all purported evidence of motivated reasoning can be captured by Bayesian models under plausible assumptions.

To adjudicate this debate, one has to step back and ask: what kind of evidence *would* put pressure on Bayesian models of belief formation? Unfortunately, this debate is often muddied by appeals to concepts like logical consistency and inconsistency (i.e. falsification), which are largely irrelevant to science. (In fact, they are profoundly un-Bayesian.) As I mentioned in my paper, with suitable adjustments to model parameters, Bayesian models can be fitted to—that is, made logically consistent with—any data.
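To make that worry concrete, here is a minimal sketch in the odds form of Bayes’ rule (an illustration of the point, not anything from my paper): because the likelihood ratio is left as a free parameter, one can always solve for the value that renders an observed belief change "Bayesian" after the fact.

```python
# A minimal sketch of the fitting worry (an illustration, not from the paper).
# In odds form, Bayes' rule says:
#     posterior_odds = prior_odds * likelihood_ratio
# Since the likelihood ratio is a free parameter, ANY observed shift from
# prior to posterior can be rendered Bayes-consistent after the fact.

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

def fitted_likelihood_ratio(prior: float, posterior: float) -> float:
    """The likelihood ratio P(E|H) / P(E|not-H) that makes the observed
    move from `prior` to `posterior` consistent with Bayes' rule."""
    return odds(posterior) / odds(prior)

# A seemingly unwarranted jump in confidence, from 0.5 to 0.9, is
# "rationalised" simply by positing a likelihood ratio of 9.
print(round(fitted_likelihood_ratio(0.5, 0.9), 2))  # 9.0
```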

The question is: which possible evidence should *weaken our confidence* in Bayesian models? Fortunately, Tappin and Gadsby don’t hold the view—surprisingly widespread in this debate—that there is nothing we could discover which should weaken our confidence in them. They concede, for example, that any genuine evidence for “motivated reasoning constitutes a clear challenge… to the assumption that human belief updating approximates Bayesian inference.”

Thursday 21 February 2019

Belief and Belief Formation Workshop

The Centre for Philosophical Psychology at the University of Antwerp held a workshop on 27 November 2018 on the topic of belief and belief formation. Here’s a brief summary, kindly written by Dan Williams, of the excellent talks given at the workshop.




Neil Levy (Oxford/Macquarie) gave the first talk, entitled ‘Not so hypocritical after all: how we change our minds without noticing’. Levy focused on a phenomenon that many people assume to be a form of hypocrisy—namely, cases in which individuals come to change their beliefs about, say, politics when popular opinion (or the popular opinion within their relevant tribe or coalition) changes. (Levy gave the example of many ‘Never Trumpers’ who then apparently changed their opinion of Trump when he came to power).

Levy argued that at least some examples of this phenomenon are in fact not best understood as a form of hypocrisy; rather, they arise from people forming beliefs “rationally”. Specifically, he drew attention to two important features of human belief formation: first, our evolutionary dependence on cumulative cultural evolution, and the suite of psychological mechanisms that facilitate the cultural learning that underlies it; second, the way in which we offload representational states such as beliefs onto the surrounding environment. 

These two features of human psychology, Levy argued, can help to explain many apparent examples of hypocrisy: when an individual radically changes his or her opinion on, say, Trump, this need not be an example of motivated reasoning, “tribalism”, or hypocrisy; rather, it can simply be a result of these—usually adaptive and truth-tracking—features of human psychology.

Eric Mandelbaum (CUNY) gave the second talk on ‘The Fragmentation of Belief’. Mandelbaum sought to develop a form of “psychofunctionalism”, according to which beliefs are best understood as real entities within the mind that play the functional role of beliefs as described by our best contemporary cognitive science. 

Psychofunctionalism has traditionally been held back, Mandelbaum argued, by the lack of concrete proposals about the psychological laws or regularities that actually govern belief formation. To address this, Mandelbaum sought to sketch a cognitive architecture, focusing specifically on the issue of how beliefs are stored.

At the core of his proposal was the idea that belief storage is highly fragmented: our beliefs do not form a unified web, he argued; rather, our best research in cognitive science supports a view of our cognitive architecture as consisting of many distinct, independently accessible data structures, which he calls ‘fragments’.

This architecture, Mandelbaum argued, generates many psychological phenomena that standard “web of belief”-based theories struggle to account for, such as inconsistent beliefs, redundant beliefs, and distinct bodies of information on the same subject.

Tuesday 19 February 2019

Biased Belief in the Bayesian Brain

Today’s post comes from Ben Tappin, PhD candidate in the Morality and Beliefs Lab at Royal Holloway, University of London, and Stephen Gadsby, PhD Candidate in the Philosophy and Cognition Lab, Monash University, who discuss their paper recently published in Consciousness and Cognition, “Biased belief in the Bayesian brain: A deeper look at the evidence”.



Last year Dan Williams published a critique of recently popular hierarchical Bayesian models of delusion, which generated much debate on the pages of Imperfect Cognitions. In a recent article, we examined a particular aspect of Williams’ critique: his argument that one cannot explain delusional beliefs as departures from approximate Bayesian inference, because belief formation in the neurotypical (healthy) mind is not Bayesian.

We are sympathetic to this critique. However, in our article we argue that canonical evidence of the phenomena discussed by Williams—in particular, evidence of the backfire effect, confirmation bias and motivated reasoning—does not convincingly demonstrate that neurotypical belief formation is not Bayesian.

The backfire effect describes the phenomenon where people become more confident in a belief after receiving information that contradicts that belief. As pointed out by Williams, this phenomenon is problematic for Bayesian models of belief formation insofar as new information should cause Bayesians to go towards the information in their belief updating, never away from it. (As an aside, this expectation is incorrect, e.g., see here or here).
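To see why, here is a minimal toy sketch (our own illustrative numbers, not drawn from the studies we review): once the agent’s likelihoods encode beliefs about the source of the message, receiving information that contradicts a hypothesis can raise the agent’s confidence in it without any departure from Bayes’ rule.

```python
# A toy sketch of why the expectation above fails (illustrative numbers only,
# not drawn from any study we review). Suppose an agent thinks a hostile
# message attacking H is somewhat MORE likely to appear when H is true
# (e.g. because opponents campaign hardest against true claims).

def bayes_update(prior_h: float, p_m_given_h: float, p_m_given_not_h: float) -> float:
    """Posterior P(H | M) via Bayes' rule."""
    p_m = p_m_given_h * prior_h + p_m_given_not_h * (1.0 - prior_h)
    return p_m_given_h * prior_h / p_m

prior = 0.70               # initial confidence in H
p_attack_if_h = 0.40       # probability of the attack on H if H is true
p_attack_if_not_h = 0.20   # probability of the attack if H is false

posterior = bayes_update(prior, p_attack_if_h, p_attack_if_not_h)
print(round(posterior, 3))  # 0.824 -- confidence in H rises, Bayes-consistently
```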

We reviewed numerous recent studies where conditions for backfire were favourable (according to its theoretical basis), and found that observations of backfire were the rare exception—not the rule. Indeed, the results of these studies showed that by and large people updated their beliefs towards the new information, even if it was contrary to their prior beliefs and in a highly emotive domain.

Thursday 14 February 2019

Self-control, Decision Theory, and Rationality

This post is written by José Luis Bermúdez, who is Professor of Philosophy and Samuel Rhea Gammon Professor of Liberal Arts at Texas A&M University. Prof. Bermúdez has published seven single-author books and six edited volumes. His research interests are at the intersection of philosophy, psychology and neuroscience, focusing particularly on self-consciousness and rationality. 

In this post, he presents his new edited collection "Self-Control, Decision Theory, and Rationality" published by Cambridge University Press. 



Is it rational to exercise self-control? Is it rational to get out of bed to go for a run, even when staying in bed seems preferable at the time? To resist the temptation to have another drink? Or to forego a second slice of cake?

From a commonsense perspective, self-control is a way of avoiding weakness of will, and succumbing to weakness of will seems to be a paradigm of irrationality – something that involves a distinctive type of inconsistency and practical failure. This reflects a focus on rationality in choices over time – on keeping one’s commitments and following through on one’s plans.

But things can look very different when one narrows down to specific, individual choices. Then rational self-control seems hard to accommodate. After all, to exercise self-control is to go against your strongest desires at the moment of choice – and why should you not take what seems to be the most attractive option? From the perspective of orthodox decision theory, rationality requires you to maximize expected utility and (at the moment of choice) being weak-willed is what maximizes expected utility.
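A toy calculation, with purely hypothetical utilities, makes the tension vivid: if the tempting option carries the higher expected utility at the moment of choice, orthodox decision theory straightforwardly recommends it.

```python
# A toy calculation of the synchronic puzzle (utilities are hypothetical,
# chosen purely for illustration). At the moment of choice, the weak-willed
# option can straightforwardly maximize expected utility.

options = {
    # option: list of (probability, utility) outcome pairs
    "have another drink": [(1.0, 10.0)],                # certain immediate pleasure
    "exercise self-control": [(0.8, 6.0), (0.2, 2.0)],  # uncertain delayed payoff
}

def expected_utility(outcomes) -> float:
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda o: expected_utility(options[o]))
print(best)  # "have another drink" -- decision theory picks the temptation
```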

Tuesday 12 February 2019

OCD and Epistemic Anxiety

This post is authored by Juliette Vazard, a PhD candidate at the Center for Affective Sciences at the University of Geneva, and at the Institut Jean Nicod at the Ecole Normale Supérieure in Paris. In this post she discusses her paper “Epistemic Anxiety, Adaptive Cognition, and Obsessive-Compulsive Disorder” recently published in Discipline Filosofiche.


I am curious about what certain types of dysfunctional epistemic reasoning present in affective disorders might reveal about the role that emotions play in guiding our epistemic activities. Recently, my interest was drawn to the emotion of anxiety. Anxiety has often been understood as belonging to the domain of psychopathology, and the role of this emotion in the everyday lives of healthy individuals has long remained understudied. In this article I argue that anxiety plays an important role in guiding our everyday epistemic activities, and that when it is ill-calibrated, this is likely to result in maladaptive epistemic activities.

Anxiety is felt towards dangers or threats which are not immediately present, but could materialize in nearby possible worlds or in the future. Like other emotions, anxiety plays a motivational role in preparing us to act in response to the type of evaluation it makes. Because anxiety functions to make “harmful possibilities” salient, it prompts a readiness to face potential threats, as well as attempts to gain information about them (their chances of materializing, their magnitude, their specific nature, etc.).

I believe analyzing the nature and role of anxiety can enlighten us on the dysfunctional mechanisms at work in obsessive-compulsive disorder. OCD is a psychiatric disorder that most often involves obsessions, “which are intrusive, unwanted thoughts, ideas, images, or impulses”, and compulsions, which are “behavioural or mental rituals according to specified ‘rules’ or in response to obsessions” (Abramowitz, McKay, Taylor 2008, p. 5). Most interestingly, persons with OCD experience the need to secure more evidence and demand more information before they can reach a decision and claim knowledge (that the stove is off, for instance) (Stern et al. 2013; Banca et al. 2015).

Thursday 7 February 2019

Epistemic Innocence at ESPP

In September 2018, a team of Birmingham philosophers, comprising Kathy Puddifoot, Valeria Motta, Matilde Aliffi, Ema Sullivan-Bissett and myself, were in sunny Rijeka, Croatia, to talk a whole lot of Epistemic Innocence at the European Society for Philosophy and Psychology.

Epistemic innocence is the idea at the heart of our research at Project PERFECT. A cognition is epistemically innocent if it is irrational or inaccurate and operates in ways that could increase the chance of acquiring knowledge or understanding, where alternative, less costly cognitions that bring the same benefits are unavailable. Over the last few years, researchers on the project and beyond have investigated the implications of epistemic innocence in a range of domains (see a list of relevant work here). Our epistemic innocence symposium at ESPP2018 was a mark of the relative maturity of the concept, and an opportunity for us to start expanding its applications.

I went first, exploring the phenomenon of confabulation, where a person gives an explanation that is not grounded in evidence, without any intention to deceive. Confabulatory explanations sometimes arise where there is cognitive decline, such as in dementia or brain injury, and also in a number of psychiatric conditions. But a range of studies demonstrate that all of us, regardless of our cognitive function, regularly confabulate about all sorts of things, from consumer choices to moral convictions and political decisions.

I suggested that confabulation might emanate from a general tendency to produce narrative explanations, which itself has a range of benefits. I think there might be considerable social and psychological benefits to such a tendency. But there is also evidence to suggest that various forms of narrative construction and story-telling can actually aid cognitive functioning and information retention. I argued that whilst our narrative tendency can produce fabrications which depart from reality, it also gives us a means of preserving information that we cannot so easily preserve when we don’t recruit narrative forms, and so we can think of this tendency as epistemically innocent.




Next, Valeria Motta and Matilde Aliffi headed into uncharted waters for the notion of epistemic innocence, thinking about it in the context of emotions, and asking specifically whether there are any epistemic benefits to the emotion of loneliness. They first gave their account of how we should think about loneliness: as a painful subjective emotional state occurring when there is a discrepancy between desired and achieved patterns of social interaction. Before asking about its epistemic status, we need to know whether emotional states such as loneliness are rationally assessable at all. Matilde and Valeria convinced us that at least some emotions are, because they have intentional content, are receptive to evidence, and shape the subject's view of the world.

They argued that loneliness may be considered irrational when a subject feels lonely even when she has evidence that social interactions are available to her. They moved on to defend the position that even the cases that could be deemed irrational present a number of epistemic benefits. Some people who experience loneliness can also experience an enhancement of self-knowledge, because they recognise the basic need for human contact and meaningful social relations. Valeria and Matilde argued, further, that there are cases of loneliness which are not only epistemically innocent but also practically beneficial, because they foster the subject's motivation to explore new and creative possibilities for connecting with people.




Ema Sullivan-Bissett was next up, applying the notion of epistemic innocence to the debate between one-factor and two-factor theorists as regards monothematic delusion formation. Empiricists about monothematic delusion formation agree that anomalous experience is a factor in the formation of these attitudes, but disagree markedly on which further factors (if any) need to be specified. (One-factor theorists think not; two-factor theorists think so.) Ema’s aim was not to resolve this debate, but to show that regardless of where you stand on it, epistemic innocence can be thought of as a unifying feature of monothematic delusions, insofar as both opposing empiricist accounts can agree on the epistemic innocence of this class of attitudes.

This constitutes a new application of the concept of epistemic innocence, showing that the notion allows us to tell a richer story when investigating the epistemic status of monothematic delusions, one which resists the trade-off view of pragmatic benefits and epistemic costs. Though monothematic delusions are often characterised by appeal to their epistemic costs, they can play a positive epistemic role, a fact that, Ema argued, is independent of the characteristics of their formation, and as such is a conclusion which all empiricists can agree upon.





Kathy Puddifoot concluded our symposium presentations, taking the notion of epistemic innocence beyond the academy and research lab, to inform policy and practice out in the world – and an important bit of the world at that – the courtroom. Kathy first recruited findings from psychology which establish that eyewitnesses are susceptible to the misinformation effect: people come to recollect an event in a way that is consistent with information provided to them after the event itself.

Kathy argued that the misinformation effect is produced by cognitive mechanisms that are epistemically innocent: although the mechanisms produce errors, Kathy maintained that they also bring substantial epistemic benefits in so far as these mechanisms underlie broadly successful remembering in the first place. Kathy suggested that eyewitnesses can make errors when testifying due to the ordinary operation of these cognitive mechanisms that in general increase the chance of them providing correct details about a criminal case.

However, jurors are likely to judge the errors that result from the misinformation effect to indicate that the eyewitness is generally unreliable. Kathy then made the case for informing jurors about the psychological findings on the misinformation effect to help them better understand the kind of errors witnesses are likely to make, and that our cognitive shortcomings do not mean we are wholly unreliable rememberers.




So, there we have it: the notion of epistemic innocence prompting us to think about our narrative tendencies; illuminating the epistemic import of seemingly irrational instances of loneliness; serving as a means to find common ground between one-factor and two-factor theorists about delusion formation; and potentially helping to instruct jurors so that they can better judge the reliability of eyewitness testimony.

What other applications might epistemic innocence have?

Tuesday 5 February 2019

The Epistemological Role of Recollective Memories

Today’s post is by Dorothea Debus, Senior Lecturer in the Department of Philosophy at the University of York.


Together with Kirk Michaelian and Denis Perrin I've recently edited a collection of newly commissioned papers in the philosophy of memory (New Directions in the Philosophy of Memory, Routledge 2018), and I've been invited to say something about my own contribution to that collection here.

My paper bears the title "Handle with Care: Activity, Passivity, and the Epistemological Role of Recollective Memories", and it is concerned with one particular type of memory, namely memories that have experiential characteristics. The paper starts from the observation that such experiential or 'recollective' memories (here: 'R-memories') have characteristic features of activity as well as characteristic features of passivity.

A subject who experiences an R-memory is characteristically passive with respect to the occurrence of the R-memory itself, but subjects can nevertheless also be, and often are, actively involved with their R-memories in various ways. At the same time, R-memories also play an important epistemological role in our everyday mental lives: when making judgements about the past, we often do rely on our R-memories of relevant past events, and it also seems that compared to other kinds of memories, we take R-memories especially seriously and give them special weight and particular attention when making judgements about the past.

What is more, there are important links between the epistemological role which R-memories play on the one hand, and our R-memories' characteristic features of passivity and activity on the other, and in the paper at hand I suggest that we can understand both these aspects of R-memory better by setting out to understand them together.