
Refining our Understanding of Choice Blindness

Robert Davies
This post is by Robert Davies, a PhD student at the University of York. Robert is interested in self-knowledge and memory, and particularly how the study of memory can shed light on philosophical problems in self-knowledge. 

Here is one variety of introspective failure: I make a choice but, when providing reasons, I offer reasons that could not be my reasons for that choice. Choice Blindness research by Lars Hall, Petter Johansson, and their colleagues (2005–) suggests this failure is surprisingly prevalent (see e.g. Johansson et al. 2008): non-clinical participants detect the manipulation of their choices at a low rate and show a high degree of willingness to offer confabulatory explanations for the manipulated choices, across a range of modalities and environments (see e.g. Hall et al. 2006; Hall et al. 2010).

We see ourselves as introspectively competent, rational decision-makers—capable of knowing our reasons, weighing them as reasons, and self-regulating when required—but since widespread confabulation seems at odds with this, some reconciliation with the data is required.

Some preferences are subject to shifting attention or mood, so caprice does not always elicit criticism. I can feel pulled now to tarte aux fraises and now to cheesecake, and I can attest to the virtues of both. Since liking and preferring are related attitudes, we might borrow factors in favour of one when answering questions about the other, especially if a preference is marginal, a choice is forced, or we are unexpectedly asked to articulate deciding factors in our selection. But moral choices are not like dessert choices—I do not think lying is fine because I fancy a change—and Choice Blindness has been detected in those too (see Hall et al. 2012).

Two hypotheses have been considered to explain the data (see Lopes 2014). On the first, ‘confabulation is the norm’: ‘choices are not based on the reasons we give’. On the second, our reasons accord with our eventual, not our original, preferences: participants do not confabulate, but our attitudes are ‘fickle’. Both hypotheses do damage to our ‘conception of rational decision making’ (pp. 29f.).

Lopes favours the second hypothesis, suggesting that reason-formulating and reason-stating are systematically distorting of attitudes (p. 33). In trying to formulate, or state, reasons for our pre-critical attitudes we distort them and report instead on the eventual choice. Introspective competence is preserved for ‘original’ preferences as long as we resist the temptation to state or formulate reasons for them.

As a prospective model of Choice Blindness, the explanation faces a number of issues. (i) It is a better story for some attitudes than for others: an absence of reason-formulating and reason-stating looks less plausible for moral choices than for some other cases. (ii) A working notion of confabulation is required on which to ground the claim that participants do not confabulate on the second hypothesis (on some notions, confabulation may occur on both hypotheses). (iii) Further support is required for the claim that reason-formulating and reason-stating systematically distort attitudes. The conclusions of the relevant studies fall short in this regard (see pp. 30–1), and some forms of reasoning could not be systematically distorting while performing the role for which they are deployed. (iv) Explaining the data does not demand either hypothesis. We might take the data to show that, in the majority of cases, non-clinical participants (and so, perhaps, the wider population) willingly provide provably false statements about the reasons for their choices when queried, having failed to detect the manipulation. But a sizable minority does detect the manipulation, and some offer statements true of their original choice, not the choice revealed in the manipulation (p. 29). These data are not irrelevant to the construction of a hypothesis, but neither of the two on offer clearly explains what participants are doing (more naturally, what they are doing right) in these cases.

Reflecting on the confabulation literature may help with two of these issues. On a view still in currency, there is a link between a failure, or gap, in memory and a tendency to confabulate (Bonhoeffer 1901; Berlyne 1972; Moscovitch and Melo 1997; McVittie 2014). On this view, it is possible that participants confabulate on both the first and the second hypotheses, but the view also helps to explain why some participants do not exhibit introspective failure: if there is no gap in, or distortion of, memory with regard to the salient features of one’s choice, one will not offer false statements in support of the revealed choice.

Refining our understanding of Choice Blindness, on this view, becomes a matter of understanding why one’s recall of salient features gives way to another process: one that draws upon elements of the current environment for the purposes of explanation. And this investigation can proceed without more general conclusions about introspective competence and rationality.

