Thursday, 20 October 2016

On Memory Errors: An Interview with Sarah K Robins

Today's blog post is an interview by Project PERFECT research fellow Kathy Puddifoot with Sarah K. Robins (pictured below), an expert on false memories and Assistant Professor of Philosophy at the University of Kansas.

KP: You are an expert on memory. How did you become interested in this topic?

SR: I became interested in memory as I was starting to put together a dissertation back in graduate school. Originally, my interest was in the personal/subpersonal distinction but I was spinning my wheels a bit. My advisor, Carl Craver, posed a question to help get me going: are memory traces personal or subpersonal? In pursuit of that question (still a difficult one to answer), my interest shifted to memory itself. There were so many interesting philosophical questions about memory—and so little connection with the vast amount of research on memory in both psychology and neuroscience. I was excited about how little work had yet been done on these intersections and that excitement has kept me going to this day.

KP: Your work focuses on cases of memory error, which you call cases of misremembering. What is distinctive about cases of misremembering?

SR: I’m fascinated by memory errors in general because they serve as an instance of a general rule for inquiring into cognitive and biological systems—you can learn a lot about how and why they work by observing what happens when they break. To this end, I’m interested in moving beyond broad discussions of false memory to promote a more refined taxonomy of memory errors that distinguishes all of the various ways that attempts to remember can go awry.

Misremembering errors struck me as a good place to start on this larger project because they are easily produced and distinctive, plus they have a quasi-paradoxical nature that makes them especially appealing to philosophers. Misremembering errors, as I characterize them, are an interesting blend of success and failure—they are errors that rely on retention. Specifically, I define misremembering as “a memory error that relies on successful retention of the targeted event. When a person misremembers, her report is inaccurate, yet this inaccuracy is explicable only on the assumption that she has retained information from the event her representation mischaracterizes” (2016: 434).

Tuesday, 18 October 2016

Project PERFECT Year 3: Kathy

During my first year on Project PERFECT I have had the opportunity to explore a number of avenues of research relating to the epistemic benefits of imperfect cognitions.

Falsity-Dependent Truths in Memory and Social Cognition

I have been collaborating with Lisa on a project on memory distortions: cases in which people appear to remember things from the past but the memories are inaccurate. The memories often have a kernel of truth, but at least some of the details are false. Many previous discussions of the phenomenon have focused on evolutionary advantages and psychological gains associated with having false memories. For example, it has been emphasised that having false beliefs about the quality of one's own performance on a task could have psychological benefits by increasing our wellbeing.

Our focus has instead been on identifying epistemic gains associated with having false memories. For example, it has been argued that many false memories result when cognitive mechanisms that are useful for imagining the future are used to represent the past, leading us to falsely believe that things we only imagined really happened. In this case, we focus on how there can be gains in terms of knowledge and understanding associated with being able to imagine future events, and how these gains are associated with the distorted memories.

In our upcoming work, we will be applying the notion of epistemic innocence to understand the nature of the epistemic gains associated with memory distortions. Something is epistemically innocent if it meets the following description: although it is epistemically costly because it involves, for example, misrepresenting reality, it can bring substantial epistemic gains that would otherwise be absent. Past discussions of epistemic innocence have focused on the epistemic innocence of cognitions: e.g. the ways that specific delusions or confabulated beliefs can bring epistemic gains. But our research on memory distortion considers how cognitive mechanisms can be epistemically innocent: how it can be epistemically costly to have a particular cognitive mechanism but the possession of the mechanism can bring epistemic gains that would not have been acquired otherwise.

We think that the application of the notion of epistemic innocence to cognitive mechanisms can clarify what occurs in the case of memory distortions and capture the precise nature of the epistemic advantages associated with the phenomenon. For example, where a cognitive mechanism both facilitates the imagination of future events and causes memory distortions, the mechanism can be viewed as epistemically innocent, because it brings benefits by giving us knowledge and understanding about the future, even if particular distorted memories do not bring benefits.

Thursday, 13 October 2016

Interview with Ralph Hertwig on Biases, Ignorance and Adaptive Rationality

In this post I am pleased to interview Ralph Hertwig (pictured below), director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.

AP: According to popular accounts offered in the field of judgment and decision-making, people are prone to cognitive biases, and such biases are conducive to maladaptive behaviour. Based on your research, to what extent is the claim that cognitive biases are costly warranted by the available evidence? If you had to identify one particular bias that is especially worrisome, because it typically results in negative real-life outcomes, which one would it be?

RH: This is a hotly debated topic in research on behavioral decision making and beyond. Many cognitive biases have been defined as such because they violate coherence norms, under the assumption that a single syntactical rule such as consistency, transitivity, the conjunction rule, or Bayes’ rule suffices to evaluate behavior. I believe that such coherence-based norms are of limited value for evaluating behavior as rational. First, we have argued that there is little evidence that coherence violations are costly, or that, if they were, people would fail to learn to avoid them. Second, we have suggested that adaptive rules of behavior can in fact imply incoherence, and that computational intractability and conflicting goals can make coherence unattainable. Yet this does not mean that coherence is without value. I think coherence plays a key role in situations where it is instrumental in achieving functional goals, such as fairness and predictability. But I do not believe that coherence should be treated as a universal benchmark of rationality.

Instead, smart choices need to be defined in terms of ecological rationality, which requires an analysis of the environmental structure and its match with the available cognitive strategies. Of course, this does not mean that people do not make mistakes—but the issue is not whether a cognitive strategy is rational or irrational per se but rather under which environmental conditions a particular strategy works or fails to work. What could happen is that a strategy that used to function well in the past no longer works because the environment has changed. This can indeed lead to costly errors. Take, for instance, the strategy of trusting experts such as doctors. In a world in which doctors’ and patients’ interests were aligned, this was a good strategy. In a world in which their interests can, for various reasons (monetary or legal), be systematically at odds, this strategy will fail.

More on this topic can be found here:

Hertwig, R., Hoffrage, U., & the ABC Research Group (2013). Simple heuristics in a social world. New York: Oxford University Press.

Tuesday, 11 October 2016

Project PERFECT Year 3: Andrea

My name is Andrea Polonioli and I recently joined the Philosophy Department at the University of Birmingham as a Research Fellow. I am extremely excited to be working under the mentorship of Lisa Bortolotti and on this fantastic project exploring the Pragmatic and Epistemic Role of Factually Erroneous Cognitions and Thoughts (PERFECT).

Until now, most of my research has focused on the following two questions: What does it mean to be rational? To what extent are we rational? During my PhD at the University of Edinburgh, I explored these questions mainly by considering the literature on judgment and decision-making in nonclinical populations. As it turns out, researchers in the field of judgment and decision-making often claim that to be rational means to reason according to formal principles based on logic, probability theory, and decision theory. In a few papers of mine, I defended the claim that formal principles of rationality are too narrow and abstract, and that behaviour should be assessed against the goals people entertain (e.g., 2016; 2014). At the same time, I have also argued that the pessimistic claims about human rationality often expressed in psychological research still need to be taken seriously, as people can often be remarkably unsuccessful at achieving their goals (e.g., forthcoming).

My plan for this year is to further explore the topics of human rationality and successful behavior, considering both clinical and non-clinical populations. First, I will be focusing on judgment and decision-making in clinical populations, as exploring these populations and comparing them against non-clinical ones offers important ways to push forward the so-called “rationality debate” in philosophy and cognitive science. Specifically, there is a significant body of evidence suggesting that clinical populations tend to experience worse life outcomes, and it seems key to disentangle different explanations for the reported associations between those populations and negative life outcomes. In particular, I aim to explore the role played by cognitive biases and imperfect cognitions in shaping those associations.