Tuesday 25 November 2014

Towards a Theory of 'Adaptive Rationality'?

I am posting this on behalf of Andrea Polonioli, PhD student in Philosophy at the University of Edinburgh.


Andrea Polonioli


My PhD project analyzes some recent developments in the ‘rationality debate’, which originated as a reaction to the body of research that followed Kahneman and Tversky’s work within the Heuristics-and-Biases project. Empirical evidence suggested that people are prone to widespread and systematic reasoning errors, and pessimistic views of human rationality have been quite popular in the psychological literature. However, this picture has also attracted fierce criticism, and several researchers have recently questioned pessimistic assessments of human rationality by emphasizing the central importance of evolutionary considerations in our understanding of rationality.

In a paper I recently published in Philosophy of the Social Sciences I present some steps already taken in my project. In particular, I critically discuss some research that has come together under the umbrella term of “adaptive rationality” (AR) (e.g., Todd and Gigerenzer 2012). According to this view, people should not be assessed against norms of logic, probability theory, and decision theory, but rather against the goals they entertain. Moreover, the conclusion that people are irrational is seen as unsupported: people are often remarkably successful once assessed against their goals and given the cognitive and external constraints imposed by the environment.

I suggest that these theorists are right in arguing for a conceptual revolution in the study of human rationality. To show this, I start by considering the ways in which decision scientists tend to justify traditional norms of rationality. A prominent justification is pragmatic: it is commonly argued that if people violate such norms, they will incur costs. But this link between norm violation and cost is open to empirical testing. As Baron puts it:

'If it should turn out […] that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that should be called rational' (Baron 2000: 53).

AR theorists take this possibility seriously. Their research shows that behaviour that violates norms of rationality can be successful once measured against epistemic and individual goals. AR theorists seem right in claiming that, given the pragmatic premises, we have no grounds for considering those behaviours as irrational.

But where does this lead us? According to AR theorists, we should think of human cognition as adaptive and successful. This is problematic, though: empirical evidence suggests that people can be quite unsuccessful at achieving their goals. To take an example, let’s go back to the quote from Baron. Are people really good at predicting what will make them happy? In recent years decision scientists have started to directly study the contexts in which decisions succeed or fail to maximize happiness. An important result seems to be that people often fail to make choices that maximize their happiness. By looking at this as well as other examples, I try to show that in some important contexts people are, after all, ‘adaptively irrational’.


6 comments:

  1. Hi Andrea and thank you for such an interesting post.

    I was wondering whether in your work you distinguish epistemic rationality from agential success. One of the implications of the fast-and-frugal framework as I understand it is that epistemic rationality and agential success substantially overlap. But there are many circumstances in which we seem to have true and justified beliefs that do not do us any good (e.g., depressive realism), or false and unjustified beliefs that further our goals (e.g., unrealistic optimism).

    What's your take on these cases? Would unrealistic optimism be a case of adaptive irrationality?

  2. Thank you very much for your interest!

    While the term ‘success’ is often taken to refer to practical success only (achieving one’s desires), excluding cognitive aims and standards, I follow adaptive rationality theorists and take ‘success’ to refer not only to the fulfillment of desires and prudential goals, but also to the achievement of epistemic goals.

    Now, as far as I understand, adaptive rationality theorists do assume that making empirically accurate predictions is generally conducive to success (both evolutionary and agential). This assumption is controversial, though, as there seem to be cases where inaccurate beliefs further some of our goals. Such cases sit uneasily with the claim that empirically accurate beliefs are adaptive. At the same time, they offer adaptive rationality theorists an argument against mainstream research on cognitive biases and irrationality: it can be argued that flawed self-assessments (e.g. the better-than-average effect, unrealistic optimism) are after all cases of adaptive behaviour and, therefore, of adaptive rationality.

    As far as I am aware, adaptive rationality theorists do not discuss these effects very often (but there are a few interesting papers: http://pss.sagepub.com/content/23/12/1515 ). Other researchers working on biased cognition and adaptive behaviour have looked in more detail at the potential benefits these families of effects/biases could offer. For instance, when discussing overconfidence, Dominic Johnson (http://dominicdpjohnson.com/publications/articles.html) points to a ‘lottery effect’: even though overconfidence might lead to worse performance in any given activity, overconfident people also engage in activities more often than unbiased people, thereby buying more lottery tickets in the competition for success.
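
    To make this intuition concrete, here is a toy Monte Carlo sketch in Python (my own illustration, not Johnson’s actual model; the entry threshold and the size of the bias are numbers I invented for the example). Entry decisions are driven by the agent’s believed chance of winning, while outcomes are driven by the true chance:

    ```python
    # Toy 'lottery effect' simulation (illustrative only, not Johnson's model).
    # Agents decide whether to enter competitions using their *estimated*
    # chance of winning; actual outcomes depend on the *true* chance.
    import random

    random.seed(1)

    N_OPPORTUNITIES = 100_000
    ENTRY_THRESHOLD = 0.5   # enter only if you believe you will probably win
    OVERCONFIDENCE = 0.2    # how much the biased agent inflates its estimate

    def simulate(bias):
        entries = wins = 0
        for _ in range(N_OPPORTUNITIES):
            true_p = random.random()              # true chance of winning
            believed_p = min(1.0, true_p + bias)  # possibly inflated estimate
            if believed_p >= ENTRY_THRESHOLD:     # the decision uses the belief...
                entries += 1
                if random.random() < true_p:      # ...the outcome uses the truth
                    wins += 1
        return entries, wins

    for label, bias in [("unbiased", 0.0), ("overconfident", OVERCONFIDENCE)]:
        entries, wins = simulate(bias)
        print(f"{label:>13}: {entries} entries, {wins} wins, "
              f"per-entry win rate {wins / entries:.2f}")
    ```

    With these made-up numbers the overconfident agent has a lower per-entry win rate but ends up with more wins overall, simply because it enters more competitions; that is all the ‘lottery ticket’ point claims.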

    While it might be tempting to reinterpret such biases/flawed self-assessments as instances of adaptive behaviour, it seems to me that there are some problems with this move. Most importantly, as Dunning et al. (2004; https://faculty-gsb.stanford.edu/heath/documents/PSPI%20-%20Biased%20Self%20Views.pdf ) make clear, flawed self-assessments may not be so innocent or beneficial. They may also lead to undesirable life outcomes, for instance undermining people’s efforts to obtain health care or making people more likely to engage in high-risk sex.

  3. Lisa and Andrea,
    since you started trading interesting links on the topic, I'll stop lurking and add a couple of my own. I expect you both to be well aware of the first, but probably not the second, so overall I guess the following is more for the benefit of interested readers.

    All right, the first link refers to:
    McKay, R. T., & Dennett, D. C. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32(6), 493–510.
    The whole issue (main article, all peer comments and final response) is available here: http://beyond-belief.org.uk/sites/beyond-belief.org.uk/files/The%20evolution%20of%20misbelief.pdf
    Well worth a read, in my opinion. The conclusion is that systematic misbeliefs may be positively selected for when, to use Lisa's words, they are a case of "unrealistic optimism". In such circumstances, misbeliefs can be consistently adaptive. Ring a bell?

    Second, there is also an intriguing approach that formally models a very similar situation with the tools of game theory; the full reference is:
    Kaznatcheev, A., Montrey, M., & Shultz, T. R. (2014). Evolving useful delusions: Subjectively rational selfishness leads to objectively irrational cooperation. arXiv preprint arXiv:1405.0041, available here: http://arxiv.org/abs/1405.0041
    Plenty of discussion is available on Kaznatcheev et al.'s blog: http://egtheory.wordpress.com/
    In particular, you may want to start from https://egtheory.wordpress.com/2013/07/09/evolving-useful-delusions-to-promote-cooperation/ and perhaps https://egtheory.wordpress.com/2014/05/04/useful-delusions-interface-theory-of-perception-and-religion/
    What Kaznatcheev et al. show is a specific case where over-optimistic "beliefs" are selected for in a formal simulation of competitive, spatially constrained games (where agents sit determines who they can interact with). The effect of inclusive fitness lurks in there, I believe.
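
    If you want to see the bare mechanism in code, here is a minimal toy version I put together (far simpler than their actual model; the donation-game payoffs, the death-birth update rule, and all parameter values are my own invented choices). Behaviour follows subjective payoffs, selection follows objective payoffs, and interaction is local:

    ```python
    # Toy spatial sketch in the spirit of Kaznatcheev et al. (2014) -- NOT
    # their actual model. 'Deluded' agents subjectively see cooperation as
    # best and cooperate; 'realists' play the objectively dominant move,
    # defect. Selection acts on objective payoffs only.
    import math
    import random

    random.seed(0)

    N = 60            # agents on a ring
    STEPS = 5000
    B, C = 3.0, 1.0   # donation-game benefit and cost (invented numbers)
    W = 1.0           # selection strength (invented)

    deluded = [i < N // 2 for i in range(N)]  # half deluded cooperators
    random.shuffle(deluded)

    def payoff(i):
        # Objective payoff of agent i from playing both ring neighbours.
        total = 0.0
        for j in ((i - 1) % N, (i + 1) % N):
            if deluded[i]:
                total -= C    # i pays the cost of cooperating
            if deluded[j]:
                total += B    # i receives the neighbour's benefit
        return total

    for _ in range(STEPS):
        # Death-birth update: a random agent 'dies'; its two neighbours
        # compete to fill the spot in proportion to objective fitness.
        i = random.randrange(N)
        left, right = (i - 1) % N, (i + 1) % N
        fl = math.exp(W * payoff(left))
        fr = math.exp(W * payoff(right))
        winner = left if random.random() < fl / (fl + fr) else right
        deluded[i] = deluded[winner]

    print(f"deluded cooperators left: {sum(deluded)} of {N}")
    ```

    In this toy setup, with the benefit-to-cost ratio above the number of neighbours, the deluded cooperators typically spread through the ring even though defection is the objectively dominant move; that, very roughly, is the flavour of result the paper formalises.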

    Conclusion: according to the two approaches I'm considering, and at least on paper, over-optimistic beliefs can be adaptive. Whether this calls for a radical rethink of what we consider rational or not I can't say: all I see is that the picture we have is complicated, and I'm tempted to dismiss Rationality (with capital R) as yet another (scientific) fetish.

  4. Thank you both, Andrea and Sergio!

    Yes, I'm aware of the excellent paper by McKay and Dennett, and rely heavily on it for my account of the epistemic benefits of motivated delusions: http://www.sciencedirect.com/science/article/pii/S1053810014001937

    I was not aware of the other source and I'll look it up!

  5. Of course! So now you know I qualify as a "lazy lurker": haven't read your latest paper yet, but I certainly will.

  6. Thank you both for the comments and the links!

