In this post Andrea Polonioli interviews Ralph Hertwig, director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin.
AP: According to popular accounts offered in the field of judgment and decision-making, people are prone to cognitive biases, and such biases are conducive to maladaptive behaviour. Based on your research, to what extent is the claim that cognitive biases are costly warranted by the available evidence? And if you had to identify one particular bias that is especially worrisome, because it typically results in negative real-life outcomes, which one would it be?
RH: This is a hotly debated topic in research on behavioral decision making and beyond. Many cognitive biases have been defined as such because they violate coherence norms, under the assumption that a single syntactical rule such as consistency, transitivity, the conjunction rule, or Bayes’ rule suffices to evaluate behavior. I believe that such coherence-based norms are of limited value for evaluating behavior as rational. First, we have argued that there is little evidence that coherence violations are costly or, if they were, that people would fail to learn to avoid them. Second, we have suggested that adaptive rules of behavior can in fact imply incoherence, and that computational intractability and conflicting goals can make coherence unattainable. Yet this does not mean that coherence is without value. I think coherence plays a key role in situations where it is instrumental in achieving functional goals, such as fairness and predictability. But I do not believe that coherence should be treated as a universal benchmark of rationality.
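To make the coherence norms RH mentions concrete, here is a minimal Python sketch, an editorial illustration rather than code from Hertwig's work, of what checking two of these norms against a person's stated judgments could look like. All judgment values and preference pairs are invented for the example.

```python
# Editorial illustration: checking two coherence norms on stated judgments.
# All numbers and preference pairs below are invented.

def violates_conjunction_rule(p_a: float, p_a_and_b: float) -> bool:
    """The conjunction rule requires P(A and B) <= P(A)."""
    return p_a_and_b > p_a

def violates_transitivity(prefs):
    """Transitivity forbids preference cycles such as A > B, B > C, C > A.
    `prefs` is a list of (preferred, dispreferred) pairs."""
    beats = {x: set() for pair in prefs for x in pair}
    for a, b in prefs:
        beats[a].add(b)
    changed = True
    while changed:                        # compute the transitive closure
        changed = False
        for a in beats:
            for b in list(beats[a]):
                new = beats[b] - beats[a]
                if new:
                    beats[a] |= new
                    changed = True
    return any(a in beats[a] for a in beats)  # an option "beats" itself: a cycle

# Judging P(A and B) = 0.5 while P(A) = 0.3 violates the conjunction rule
# (the pattern behind the famous "Linda problem"):
print(violates_conjunction_rule(p_a=0.3, p_a_and_b=0.5))            # True
print(violates_transitivity([("A", "B"), ("B", "C"), ("C", "A")]))  # True
```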
Instead, smart choices need to be defined in terms of ecological rationality, which requires an analysis of the environmental structure and its match with the available cognitive strategies. Of course, this does not mean that people do not make mistakes—but the issue is not whether a cognitive strategy is rational or irrational per se but rather under which environmental conditions a particular strategy works or fails to work. What could happen is that a strategy that used to function well in the past no longer works because the environment has changed. This can indeed lead to costly errors. Take, for instance, the strategy of trusting experts such as doctors. In a world in which doctors’ and patients’ interests were aligned, this was a good strategy. In a world in which their interests can, for various reasons (monetary or legal), be systematically at odds, this strategy will fail.
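RH's doctor example lends itself to a small simulation. The sketch below is a hypothetical illustration of the ecological-rationality point, not a model from the cited book: the very same "trust the expert" heuristic is scored in an environment where the expert's and the client's interests are aligned and in one where they are systematically at odds. All probabilities are invented.

```python
import random

random.seed(1)

def expert_advice(best_option: int, aligned: bool) -> int:
    """An aligned expert recommends the client's best option; a misaligned
    expert recommends it only 30% of the time (invented figure), otherwise
    pushing the option that serves the expert's own interests."""
    if aligned or random.random() < 0.3:
        return best_option
    return 1 - best_option

def trust_the_expert(n_trials: int, aligned: bool) -> float:
    """Share of trials in which blindly following the expert's advice
    picks the option that is actually best for the client."""
    hits = 0
    for _ in range(n_trials):
        best = random.randint(0, 1)          # option 0 or 1 is truly best
        if expert_advice(best, aligned) == best:
            hits += 1
    return hits / n_trials

print(trust_the_expert(10_000, aligned=True))   # ~1.0: the heuristic works
print(trust_the_expert(10_000, aligned=False))  # ~0.3: the same heuristic fails
```

The strategy itself never changes; only the environment does, which is exactly why the question "rational or irrational per se?" is the wrong one to ask.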
More on this topic can be found here:
Hertwig, R., Hoffrage, U., & the ABC Research Group (2013). Simple heuristics in a social world. New York: Oxford University Press.
Arkes, H. R., Gigerenzer, G., & Hertwig, R. (2016). How bad is incoherence? Decision, 3, 20-39. doi:10.1037/dec0000043
AP: In the history of thought it has often been claimed that pursuit of knowledge is what makes life meaningful. One fascinating line of your research has focused on cases of deliberate ignorance, where people choose instead not to know. Do you see any functions and value in people’s deliberate ignorance?
RH: Indeed, we have argued that what could be described as “deliberate ignorance” has a wide range of functions, such as regulating emotion, avoiding regret, maximizing suspense and surprise, enhancing performance, enabling the pursuit of strategic goals, managing information, and even ensuring impartiality and fairness. Is deliberate ignorance desirable for the individual and for society?
There is no ready-made answer to the question of when deliberate ignorance is beneficial, rational, or ethically appropriate. In my view, each function and each class of instances must be assessed on its own merits. Some variants of strategic ignorance can be modeled as the rational behavior of a utility-maximizing agent. But deliberate ignorance can also have a sinister side, such as when it is used to evade responsibility, escape liability, or defend anti-intellectualism. I would love for philosophers to get engaged in analyzing the ethical and rationality implications of deliberate ignorance.
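A toy expected-utility comparison can show how strategic ignorance may fall out of standard utility maximization. The following sketch, including every number in it, is a hypothetical editorial example (think of someone declining a predictive medical test), not a model from the cited article.

```python
# Toy model: an agent decides whether to learn a piece of information.
# If the news is bad, knowing it carries an emotional cost that outweighs
# the value of acting on it. All utilities below are invented.

P_BAD = 0.2            # probability the news is bad

U_KNOW_GOOD = 10.0     # relief from a good result
U_KNOW_BAD = -50.0     # distress from a bad result, net of any benefit of acting early
U_STAY_IGNORANT = 0.0  # baseline: life goes on unchanged

eu_know = (1 - P_BAD) * U_KNOW_GOOD + P_BAD * U_KNOW_BAD
eu_ignore = U_STAY_IGNORANT

print(f"EU(know)   = {eu_know}")    # 0.8 * 10 + 0.2 * (-50) = -2.0
print(f"EU(ignore) = {eu_ignore}")  # 0.0
print("Deliberate ignorance maximizes expected utility:", eu_ignore > eu_know)
```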
This article is a good starting point for those wanting to find out more about deliberate ignorance: Hertwig, R., & Engel, C. (2016). Homo ignorans: Deliberately choosing not to know. Perspectives on Psychological Science, 11, 359-372. doi:10.1177/1745691616635594
AP: In a recent paper of yours, you discuss studies showing that clinical populations can show higher conformity to principles of rationality than non-clinical ones. What normative implications do these findings have? Can they tell us anything about the value of norms of rationality accepted in psychology and economics?
RH: Yes, this is another fascinating subject. In light of the observation that abnormality can be conducive to rationality, one may well ask how sane various benchmarks of rationality really are. One may, for instance, argue that the essence of the autistic personality, vividly described by writers such as Oliver Sacks, is not unlike the rational models of decision making embraced by many economists and some psychologists. Specifically, the nature of human rationality becomes “autistic” if, as in many economic analyses, we reduce “rationality” to purely self-regarding preferences and individuals’ ability to form consistent probabilistic beliefs. This impoverished notion of rationality assumes that people are exclusively self-regarding (with no positive or negative concern for the welfare of others) and ignores the importance of reciprocal fairness in the fabric of human society.
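For contrast, one standard way to formalize preferences that are not purely self-regarding is the Fehr-Schmidt (1999) inequity-aversion model, in which an agent's utility is reduced both by being behind and by being ahead of others. The two-player sketch below is an editorial illustration with invented payoffs and parameter values; it is not drawn from the papers cited here.

```python
def selfish_utility(own: float, other: float) -> float:
    """Purely self-regarding preferences: only one's own payoff matters."""
    return own

def fehr_schmidt_utility(own: float, other: float,
                         alpha: float = 0.8, beta: float = 0.6) -> float:
    """Two-player Fehr-Schmidt inequity aversion: utility is reduced by
    disadvantageous inequity (alpha) and advantageous inequity (beta).
    Parameter values are invented; the model assumes beta <= alpha."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

# Choosing between an uneven split (8, 2) and an even split (5, 5):
print(selfish_utility(8, 2))        # 8 -> the selfish agent takes the uneven split
print(fehr_schmidt_utility(8, 2))   # 8 - 0.6 * 6 = 4.4
print(fehr_schmidt_utility(5, 5))   # 5 -> the inequity-averse agent prefers 50:50
```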
For more on this topic see: Hertwig, R., & Herzog, S. M. (2009). Fast and frugal heuristics: Tools of social rationality. Social Cognition, 27, 661-698. doi:10.1521/soco.2009.27.5.661
Hertwig, R., & Volz, K. G. (2013). Abnormality, rationality, and sanity. Trends in Cognitive Sciences, 17, 547-549. doi:10.1016/j.tics.2013.08.011
AP: A pivotal debate in psychology centres on the role of reasoning in making decisions. In your research you emphasize the adaptive value of fast and frugal decision-making, but do you think that there are any advantages in the sorts of confabulations and post hoc rationalisations that people often seem to engage in when attempting to justify their decision-making?
RH: Again, this will depend on the social and environmental conditions under which decisions are made. Of course, there will be circumstances under which people need to account for their decisions and their potentially disadvantageous outcomes. In such cases, it may be unwise to argue that one did not take all the available information into account but focused instead on one or a few good reasons, cues, or arguments. In other words, not only is the decision-making process itself important; sometimes it may matter even more how we reconstruct and defend our decision making in the face of criticism and when accountability becomes an issue. For instance, people testifying in court or reporting to their boss may be better off arguing that they integrated all available information into their decisions. At the same time, situations exist in which the decision-making process needs to be as transparent and replicable as possible (e.g., medical diagnostics). In such worlds, fast and frugal decision making will be valuable not only during the decision-making process itself but also subsequently, when the process has to be made accessible and transparent to others.
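The "one or a few good reasons" strategy RH describes is exemplified by lexicographic heuristics such as take-the-best (Gigerenzer & Goldstein, 1996). The Python sketch below is a minimal generic version with invented cue names and values, not code from the cited book: cues are inspected in order of validity, and the first cue that discriminates between the options decides.

```python
# Minimal sketch of the take-the-best heuristic: compare two options cue by
# cue, best cue first, and stop at the first cue that discriminates.
# Cue names and values below are invented.

def take_the_best(option_a: dict, option_b: dict, cues_by_validity: list) -> str:
    """Return 'A', 'B', or 'guess'. Cues are binary (1 = positive)."""
    for cue in cues_by_validity:            # highest-validity cue first
        a, b = option_a[cue], option_b[cue]
        if a != b:                          # the cue discriminates: stop search
            return "A" if a > b else "B"
    return "guess"                          # no cue discriminates

# Which of two cities is larger? Invented cue profiles:
cues = ["has_intl_airport", "is_state_capital", "has_university"]
city_a = {"has_intl_airport": 1, "is_state_capital": 0, "has_university": 1}
city_b = {"has_intl_airport": 1, "is_state_capital": 1, "has_university": 1}

# The first cue ties, the second decides; the third is never inspected.
print(take_the_best(city_a, city_b, cues))  # "B"
```

Because search stops at the first discriminating cue, the decision is easy to document and defend afterwards, which connects directly to RH's point about transparency and accountability.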
For this and similar issues, this source may be of help: Hertwig, R., Hoffrage, U., & the ABC Research Group (2013). Simple heuristics in a social world. New York: Oxford University Press.