Thursday 27 April 2017

Interview with Thomas Sturm on the Science of Rationality and the Rationality of Science

In this post Andrea Polonioli interviews Thomas Sturm, ICREA Research Professor at the Department of Philosophy at the Universitat Autònoma de Barcelona (UAB) and member of the UAB's Center for History of Science (CEHIC). His research centers on the relation between philosophy and psychology, including their history. Here, we discuss his views on empirical research on human rationality.


AP: The psychology of judgment and decision-making has been divided into what appear to be radically different perspectives on human rationality. Whilst research programs like heuristics and biases have been associated with a rather bleak picture of human rationality, Gerd Gigerenzer and his colleagues have argued that very simple heuristics can make us smart. Yet, some philosophers have also argued that, upon close scrutiny, these research programs do not share any real disagreement. What is your take on the so-called “rationality wars” in psychology?

TS: Let me begin with a terminological remark. I would like to refrain from further using the terminology of “rationality wars”. It was introduced by Richard Samuels, Stephen Stich, and Michael Bishop (SSB hereafter) in 2002, and I have used their expression too, without criticizing it. In academic circles, we may think that such language creates no problems, and I hate to spoil the fun. But because theories of rationality have such broad significance in science and society, there is a responsibility to educate the public, and not to hype things. Researchers are not at war with one another. Insofar as a dispute becomes heated, or fights for funding and recognition play a role, we should speak openly about this, tone down our language, and not create a show or a hype. We should discuss matters clearly and critically, end of story.

Now, I study this debate, which has many aspects, with fascination. It is fascinating because it concerns a most important concept of science and social life, adding fresh perspectives to philosophical debates that have occasionally become too sterile. And it is so interesting because it provides ample material for philosophy of science.

AP: Can you explain what you have in mind here?

TS: There are many things one could point out here, but perhaps the philosophically most interesting aspect of the debate is fueled by puzzles and paradoxes in our ordinary concept of rationality. People should know the background of this. Much of the present debate is a long-term reaction to the attempt to explicate a notion of rationality that represents how an ideal reasoner, equipped with unlimited temporal, cognitive, and other resources, would judge and decide – and how we finite reasoners should judge and decide. This notion was mostly developed during the mid-20th century through a confluence of modern logic as developed since Frege, probability theory, and the economic theory of games and decisions, as presented by von Neumann and Morgenstern in 1947.

The attributes of the desired theory vary from author to author and discipline to discipline, but it is not historically inaccurate to say that there was an aim to develop an account that would be formal, optimizing, algorithmic, and mechanical (i.e. implementable on computers). So, let’s call this the ‘FOAM’ theory, or group of theories, of rationality. FOAM theories quickly ran into serious problems – such as Allais’ paradox, Ellsberg’s paradox, Newcomb’s problem, or Amartya Sen’s criticism of “property Alpha”, i.e., the condition of the independence of irrelevant alternatives. Some of these challenges go back to early modern times, such as the St. Petersburg paradox, but in the 20th century they proliferated. The more precisely FOAM theories were spelled out, the easier it became to raise objections to them. Now, if the theories in question were taken to be descriptive, these problems could be seen as empirical counterexamples, and then one would have to look for an alternative explanation of judgment and decision-making.

If, on the other hand, one took the theories to be normative, then the same problems could be viewed as paradoxes, revealing that our concept of rationality isn’t homogeneous, or as indicating that we have conflicting normative intuitions about how people should reason, judge, and decide. This is the most basic philosophical source of the current debate: Followers of the “heuristics and biases” program, while agreeing that FOAM theories are descriptively inadequate, view them as normatively adequate, and choose to ignore normative disputes. Critics think that we need not take a particular FOAM theory for granted. The fast-and-frugal heuristics or “bounded rationality” program, by contrast, is more critical of standard norms of rationality. Sometimes the claim is that FOAM theories should be abolished altogether; sometimes the objection is only to one particular FOAM theory or part of it, or even only to specific applications of such a theory, not to its normative validity.
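As an aside, the St. Petersburg paradox mentioned above shows concretely how a FOAM-style norm (maximize expected value) can clash with intuition; here is a minimal numerical sketch (my own illustration, not part of the interview):

```python
# The St. Petersburg game: a fair coin is flipped until the first head.
# If the first head occurs on flip k, the payoff is 2**k ducats.
# P(first head on flip k) = (1/2)**k, so each term of the expected-value
# sum contributes (0.5**k) * (2**k) = 1, and the sum diverges; yet most
# people would pay only a modest fee to play.

def st_petersburg_partial_ev(n_terms: int) -> float:
    """Expected value of the game truncated after n_terms possible flips."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_partial_ev(n))  # grows linearly, without bound
```

The divergent sum is the paradox: the expected-value norm recommends paying any finite price to play, which hardly anyone finds rational.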

Now, SSB thought the debate was about how far we are rational. This is an empirical question: How often do we follow norms from logic or probability theory, and how often do we fail to do so? SSB tried to resolve the psychological dispute by showing that the competing research programs really agree about all their core claims, and that the differences are only “rhetorical flourishes”. Stated differently, SSB argued that for Kahneman and Tversky the glass of human rationality looks, sadly, half empty, while for Gigerenzer it appears, happily, half full. So, the debaters would indeed be under an illusion. SSB’s analysis was detailed, but in my opinion it misinterpreted some of the claims of both parties. For instance, they put Gigerenzer in the camp of evolutionary psychology, to which he is not really committed. More importantly, SSB did not properly address the core conceptual and normative issues. They left out of sight what the glass really is: What is rationality? What constitutes reasoning at all? And what are core standards of good reasoning? These questions make the debate so philosophically fascinating. (I should mention that Bishop’s views have changed, as he told me in conversation.)

Let me add in what respect I find some philosophical reactions to the debate, namely those from certain – not all – naturalists, unconvincing. I don’t mean the kind of naturalism that claims that there are no supernatural entities. Very few people would deny that. I mean the kind of naturalism that claims that we can explain everything by the theories and methods of the sciences, and also the naturalism that claims that all philosophical questions – say, about reason or rationality – can be answered by the methods of science. Such naturalism often takes the form of following the latest developments in science in too uncritical ways. Whatever the rationality debates prove, they show that we cannot always take science at face value. Cognitive scientists, economists and other social scientists have been aware that the dispute isn’t simply about empirical questions. The issues are philosophical ones: conceptual, methodological, normative. While some scientists themselves are trying to address them, they cannot do this by the standard methods of their fields because those methods presuppose nonempirical assumptions about what rationality is, and how to study it.

AP: In what ways do you think philosophers can contribute to psychological research on judgment and decision-making?

TS: The standard answer here is, of course: Philosophers can help with their expertise in dealing with conceptual puzzles, with normative issues (such as those arising in the methodology of science), and, in general, with critical thinking about questions which science cannot answer by its own methods. This answer is basically correct, but it must be adapted to the contexts in which philosophers are asked for advice. With respect to psychological debates about rationality, the main caveat concerns the special methods of psychology, as well as the ongoing, rich and complex debates within psychology about them. Philosophers do not learn anything about this from textbooks in philosophy of science. For instance, I studied with influential German and American philosophers of science, such as Lorenz Krüger and Philip Kitcher, but their core expertise concerns physics and biology, respectively.

These areas, plus neighboring fields such as astronomy or chemistry, provide the majority of teaching materials in philosophy of science to this day. I do not mean that there are no domain-general topics in philosophy of science: theory-ladenness of data, underdetermination of theories by data, or experimental artifacts are problems in physics just as much as in psychology, no doubt. But some important issues in the rationality debate concern specifics of the proper understanding and use of probability theory, or the roles of contextual factors in the explanation of experimental results, among other things – and then also important facts of the sociology of science, such as potentially distorting citation practices.

AP: What would be an example of such a distortion?

TS: A famous article by Tversky and Kahneman from 1974 seems to have been excessively cited, partly because it was published in Science. It made their claims concerning the ubiquity of fallacies or biases in reasoning highly popular, burying opposing results and studies underneath. In the early 1990s, for instance, Lola Lopes advanced some smart and sharp criticisms of the heuristics-and-biases approach, but she could not get her papers into the highest-ranking journals. Her work remains largely ignored. So, there may be a citation bias in the research about human biases in reasoning. I am formulating carefully here: The issue is a disputed one, but it is highly important, and it would deserve close scrutiny.

AP: Back to our original question: What can philosophers contribute?

TS: We can answer this, in part, by looking at what they have contributed in the past. Some contributions were more like side effects of their work, while others were intentional. Of the first kind, the example that comes to my mind is Grice’s work on maxims of conversation. It has frequently been cited by psychologists who doubt that the experiments showing mistakes in reasoning are methodologically sound: at least some of the experiments produce results that could, by applying Gricean maxims, be interpreted more charitably, as more rational. Of the second kind, I think of Sen’s criticism of “property Alpha” – a clear counterexample showing that a particular condition of rational choice is far from uncontroversial. But this objection came from someone who is a philosopher and a scientist at the same time.
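Property Alpha (contraction consistency) can be stated and checked in a few lines. The following is a minimal sketch, with an illustrative choice function of my own devising in the spirit of Sen’s well-known polite-guest examples:

```python
# Property Alpha: if x is chosen from a menu S, then x must also be chosen
# from any smaller menu T contained in S that still includes x.

def satisfies_alpha(choice, menus):
    """Check property Alpha for a choice function over the given menus."""
    for s in menus:
        x = choice(s)
        for t in menus:
            if t <= s and x in t and choice(t) != x:
                return False
    return True

# A "polite guest" who never takes the largest item on offer: from {1, 2, 3}
# she picks 2, but from {1, 2} she picks 1, violating property Alpha even
# though her behavior is perfectly intelligible.
def polite_guest(menu):
    ordered = sorted(menu, reverse=True)
    return ordered[1] if len(ordered) > 1 else ordered[0]

menus = [frozenset({1, 2, 3}), frozenset({1, 2})]
print(satisfies_alpha(polite_guest, menus))  # False
print(satisfies_alpha(max, menus))           # True: plain maximization obeys Alpha
```

The point of Sen’s counterexample is exactly this: a pattern of choice that violates the formal condition can still be entirely reasonable once context (here, politeness) is taken into account.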

So, philosophy of science as it usually is now is insufficient. But there are a number of classical authors at the interfaces of philosophy and psychology whom I recommend reading, such as Karl Bühler, Kurt Lewin, Egon Brunswik, Paul Meehl, or, currently, Klaus Fiedler, Joel Michell, or Gigerenzer. The latter is not only a participant in the rationality debates, but also a shrewd historian and philosopher of psychology. All of them had philosophical training or have worked with philosophers over their careers. I keep on learning about methodological issues from such psychologists. So philosophers can help to reinforce critical thinking, but they must do this in close cooperation with those scientists who think philosophically about their field. By the way: If that is a version of naturalism, it’s one that gives distinctive, critical weight to philosophy.

AP: And what scientific findings on human reasoning do you think should receive most attention within the philosophical community?

TS: I had better say which findings should not be paid much attention to, and how we should not use psychological research more generally. There are two studies that philosophers have pointed to endlessly: the Wason selection task and the Linda problem. Both have led to doubtful “findings”. However, these reasoning tasks are not too hard to explain, and so they get used time and again. To use Thomas Kuhn’s expression, they have become paradigms of bad reasoning that philosophers who wish to build upon psychological research cite – they are both particular exemplars of problem-solving and carriers of the methodological, theoretical, and axiological assumptions of the research built around them. If you don’t know these, you cannot take part in the discussion.

So, OK, know the paradigm, and if you don’t, look it up on the internet. But here is the difference from real Kuhnian paradigms: it is far from clear that they can be used as good models for further research. True, psychologists from the heuristics-and-biases camp have used them in this way. But no true theory has emerged from that research. Heuristics such as representativeness or availability are mere names for processes that we do not understand. Also, we have a collection of heuristics, but no idea of how they could be combined into a systematic theory of human judgment and decision-making.

AP: What does that mean, for instance, for the interpretation of results in the Linda problem?

TS: If Kahneman and Tversky are right, then subjects here are misled because they take into account the description of Linda as smart, trained in philosophy, politically interested, and having participated in antinuclear demonstrations, when this information actually should not guide their reasoning about whether it is more probable that Linda is a feminist bank teller than that she is a bank teller; and so subjects commit “conjunction fallacies”. However, we might as well say that subjects view the information as possible evidence, given to them in order to solve the reasoning task about what is more probable in the sense of “credible”, as opposed to mathematical probability. In other heuristics-and-biases studies, we have been warned against the sin called “belief perseverance” – the stubborn inclination to believe something that is actually undermined by the evidence.

Surely you ought to pay attention to the evidence! But if subjects faced with the Linda problem do that, they are blamed for committing an error. That is what I would call blind or uncritical naturalism in the use of psychological research by philosophers. There is an abundance of ways to describe the reasoning behavior of subjects such that it is quite rational or reasonable. Note 1: I am not saying that there are no errors. I am saying that there is no unified theory. Note 2: You may say I just talked about the Linda problem at length while I said people should stop doing so. But perhaps you can now see why, or in what way, we should not talk about the Linda problem: We should no longer blindly cite it, or the numerous similar studies about other reasoning norms, as good empirical evidence for the claim that people are bad reasoners. In a sense, that is what naturalistic philosophers like Stich and others have done. This should end.
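For readers unfamiliar with the formal standard at stake: the “conjunction fallacy” charge rests on the conjunction rule of probability theory, which says P(A and B) can never exceed P(A). A minimal numerical sketch (my own illustration, not part of the interview):

```python
# Conjunction rule: for any events A and B, P(A and B) <= P(A), because the
# conjunction picks out a subset of the cases in which A alone holds.
# We verify this numerically for randomly generated probabilities.

import random

random.seed(0)  # reproducible

def conjunction_rule_holds(trials: int = 10_000) -> bool:
    for _ in range(trials):
        p_teller = random.random()                   # P(Linda is a bank teller)
        p_feminist_given_teller = random.random()    # P(feminist | bank teller)
        p_both = p_teller * p_feminist_given_teller  # P(feminist bank teller)
        if p_both > p_teller:
            return False
    return True

print(conjunction_rule_holds())  # True
```

The rule itself is uncontested; the dispute Sturm describes is over whether subjects who rank the conjunction higher are really violating it, or are instead answering a different, reasonable question about credibility.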

The same for all other findings or “findings”. It is not the job of the philosopher to simply report what psychologists have found out, and to claim we can derive a naturalistic account of rationality from that. We should rather help people to think things through for themselves!

AP: We have learnt quite a lot over the past decades from the science of rationality. But as a philosopher of science, what do you think that scientific research on reasoning can tell us about the rationality of science?

TS: That’s a highly interesting question, about which there isn’t much work so far. My current thinking here is the following. While a lot of psychological research has been devoted to the reasoning of both laypersons and experts, including scientific experts, the contributions that could help us to explain, or even improve, the rationality of science are close to zero. That is true, I think, for both the heuristics-and-biases and the fast-and-frugal heuristics approaches. Consider the former first. Here, it is assumed that we know what the standards of good or valid reasoning are. Otherwise the experiments could not get off the ground in the first place. So, Kahneman, Tversky, and their collaborators did not care or dare to ask how psychological research might be useful for understanding the rationality of science. In philosophy, some have tried to use heuristics and biases for this, attempting to support naturalism in philosophy of science.

How did this work out? For instance, Miriam Solomon has used the concepts of belief perseverance, representativeness, or availability in order to explain the normative successes – the rationality – of choices made by scientists in the geological revolution of the 20th century. This is virtually the opposite of what the heuristics-and-biases program does! Also, Solomon simply applied the concept of, say, belief perseverance to the few geologists who trusted in continental drift long before sufficient evidence was in. As so often with applications of heuristics and biases, people apply the terminology without having a clear normative standard against which to measure subjects’ behavior. But we should recognize that sometimes there simply are no such norms, and science at its research frontiers is a good example of this. We should consequently not use the language of heuristics and biases, or at least not pretend that we could thereby explain the rationality of theory change in science.

On the other side, adherents of bounded rationality have not studied much which heuristics scientists use, or could use. Bill Wimsatt’s work is an exception, and Peter Todd’s upcoming study of Darwin’s notebooks may be one too. The main reason for the difficulty of applying fast-and-frugal heuristics is that such heuristics are often based either on long experience or on a fit between the environment and our modes of thinking. Science, however, constantly tries to innovate. Understanding how scientific innovation is possible, or how it can be understood rationally, is something none of the existing theories of rationality, descriptive or normative, can provide. To my mind, achieving progress here would require an approach that closely integrates philosophy of science with history of science, and both with theories of rationality.

AP: Are you currently doing any research on human rationality?

TS: Yes, and lots of it! My projects fall into various related areas. The last topic we just discussed is one of them. So far, I have only one paper in the pipeline, about the (im)possibility of understanding scientific innovation rationally. Maybe more will come. I like to go where the arguments carry me. Then, in our HPS group here in Barcelona, we are currently discussing the history and philosophy of psychological theories and debates, trying especially to understand what form of naturalism in epistemology or the philosophy of science might be developed from them. How have normative and descriptive theories been developed at the interfaces of philosophy and science? How have naturalists tried to adapt, time and again, to new empirical knowledge about human reasoning, and what contributions have philosophers actually made? And could there really be a science of rationality?

Then, I aim to connect current debates over rationality to the history of the concepts of reason and rationality as they have been developed and revised, time and again, in the philosophical tradition. Immanuel Kant’s understanding of reason is particularly central to me here, partly due to my other research in the history of philosophy, but also because he is just such a rich thinker in this area. Philosophers, with very few exceptions, do not think about his conception of reason in the light of the different conceptions of rationality now on the market. I think this might be heuristically useful, though one must be careful to avoid reading him too anachronistically. I once stumbled over an article by C. W. Churchman – nowadays mostly forgotten, but he was influential as an editor of the journal Philosophy of Science and as an operations researcher. In 1971, Churchman asked: “Was Kant a decision theorist?” He translated several of Kant’s objections to teleological ethical theories into assumptions of 20th-century rational choice theory.

This is intriguing, because it makes Kant’s rejection of consequentialism in ethics look more reasonable. Another example stems from attempts to read his philosophy of history in game-theoretic terms. Such readings seem to put Kant into the camp of FOAM theories of the 20th century, and perhaps reinforce the stereotype of him as the “man of the clock”, the man who would pedantically stick to rules in all areas of life. But Kant says that we cannot and should not apply rules of reason mechanically. It takes another cognitive faculty (which he gives different names, such as “seasoned judgment” or “mother wit”) to apply rules intelligently. That is part of what he means when he says that reason has to constantly criticize itself, to study its foundations and limits. This continues to be very useful advice.
