
Good Guesses

This post is by Kevin Dorst and Matthew Mandelkern, whose paper "Good Guesses" is forthcoming in Philosophy and Phenomenological Research. The authors have written another post on guessing and the conjunction fallacy, which you can read here.


Matthew Mandelkern



Where do you think Latif will go to law school? He’s been accepted to Yale, Harvard, Stanford, and NYU. We don’t know his preferences, but here’s the proportion of applicants with the same choices who’ve gone to each:

Yale, 38%; Harvard, 30%; Stanford, 20%; NYU, 12%.

So take a guess: Where do you think he’ll go?

Some observations: One natural guess is ‘Yale’. Another is ‘Either Yale or Harvard’; meanwhile, it’s decidedly unnatural to guess ‘not Yale’, or ‘Yale, Stanford, or NYU’.

Though robust, these judgments are puzzling. ‘Yale’ is a fine guess, but its probability is below 50%, meaning that its negation is strictly more probable (38% vs. 62%); nevertheless, ‘not Yale’ is a weird guess. Moreover, ‘Yale or Harvard’ is a fine guess—meaning that it’s okay to guess something other than the single most likely school—yet ‘Yale, Stanford, or NYU’ is a weird guess (why leave out ‘Harvard’?). This is so despite the fact that ‘Yale or Harvard’ is less probable than ‘Yale, Stanford, or NYU’ (68% vs. 70%).
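The arithmetic behind these comparisons can be checked directly from the proportions given above:

```python
# Proportions of comparable applicants who went to each school (from the post).
probs = {"Yale": 0.38, "Harvard": 0.30, "Stanford": 0.20, "NYU": 0.12}

def p(*schools):
    """Probability that Latif goes to one of the listed schools."""
    return round(sum(probs[s] for s in schools), 2)

print(p("Yale"))                        # 'Yale': 0.38
print(p("Harvard", "Stanford", "NYU"))  # 'not Yale': 0.62
print(p("Yale", "Harvard"))             # 'Yale or Harvard': 0.68
print(p("Yale", "Stanford", "NYU"))     # 'Yale, Stanford, or NYU': 0.7
```

So 'not Yale' is strictly more probable than 'Yale', and 'Yale, Stanford, or NYU' is strictly more probable than 'Yale or Harvard', yet in each pair it is the more probable guess that sounds weird.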


Kevin Dorst


In this paper we generalize these patterns (following Holguín 2020) and develop an account that explains them. The idea is that guessers aim to optimize a tradeoff between accuracy and informativity—between saying something that’s likely to be true, and saying something that’s specific.

As William James (1897) famously pointed out, these goals directly compete: the more informative an answer is, the less probable it will be. Some people will put more weight on informativity, guessing something specific like ‘Yale’. Others will put more weight on accuracy, guessing something probable like ‘Yale, Harvard, or Stanford’.

Neither of these guesses is a mistake; they are just different ways of weighing accuracy against informativity. But on the way we spell this out, every permissible way of making the tradeoff will make 'not Yale' and 'Yale, Stanford, or NYU' bad guesses. Why? In each case, there is an equally informative but strictly more probable answer: 'not NYU' and 'Yale, Harvard, or Stanford', respectively.
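One simple way to make the tradeoff concrete (a toy linear scoring rule for illustration, not the paper's official model) is to score a guess by its probability minus a cost c for each answer it leaves open, then sweep c from accuracy-weighted (small) to informativity-weighted (large). However c is set, the optimal guess always consists of the most probable answers, so 'not Yale' and 'Yale, Stanford, or NYU' never come out on top:

```python
from itertools import combinations

probs = {"Yale": 0.38, "Harvard": 0.30, "Stanford": 0.20, "NYU": 0.12}

def best_guess(c):
    """Guess maximizing probability minus an informativity cost c per
    answer included (a toy linear tradeoff, assumed for illustration)."""
    guesses = [set(g) for k in range(1, 5) for g in combinations(probs, k)]
    return max(guesses, key=lambda g: sum(probs[s] for s in g) - c * len(g))

# As c grows, optimal guesses shrink, but each is a "most probable
# answers first" set: {Y,H,S,N} -> {Y,H,S} -> {Y,H} -> {Y}.
for c in [0.05, 0.15, 0.25, 0.35]:
    print(c, sorted(best_guess(c), key=lambda s: -probs[s]))
```

The reason is general: swapping a less probable answer for a more probable one (e.g. NYU for Harvard) keeps a guess exactly as informative while raising its probability, so unfiltered guesses like {Yale, Stanford, NYU} are dominated at every weighting.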

Now consider a different question.

Linda is 31 years old, single, and very bright. As an undergraduate she majored in philosophy and was highly active in social-justice movements. Which of the following do you think is more likely?

1) Linda is a bank teller.

2) Linda is a bank teller and is active in the feminist movement.

Famously, Tversky and Kahneman (1983) found that most people choose (2) over (1). However, every way of (2) being true is also a way of (1) being true, so (2) can't be more likely! This is known as the conjunction fallacy: ranking a specific claim as more probable than a broader claim it entails.

But notice: by the exact same token, every way in which 'Yale' would be a true guess is also a way in which 'Yale, Stanford, or NYU' would be true. Yet, for the reasons mentioned above, the former is a good guess while the latter is a weird one: sometimes a drive for informativity can make it reasonable to give an answer that's less probable than some of the alternatives. Thus, perhaps, the preference for the conjunction (2) can be explained by the fact that it's more informative than (1).
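The same toy scoring rule as before shows how this could go. With illustrative probabilities (the specific numbers below are assumptions, not data: the point is only that Linda's description makes a non-feminist bank teller much less likely than a feminist one), the more informative conjunction can outscore the bare claim even though it is less probable:

```python
# Illustrative (made-up) probabilities for Linda, given her description:
# most ways of her being a bank teller are ways of also being a feminist.
p_teller_feminist = 0.05      # assumed: P(teller and feminist)
p_teller_not_feminist = 0.01  # assumed: P(teller and not feminist)
p_teller = p_teller_feminist + p_teller_not_feminist

c = 0.02  # weight on informativity: cost per possibility a guess leaves open

score_1 = p_teller - 2 * c           # 'bank teller' leaves two possibilities open
score_2 = p_teller_feminist - 1 * c  # 'feminist bank teller' leaves one

print(score_1 < score_2)  # True: the less probable conjunction scores higher
```

On this sketch, choosing (2) isn't a brute probabilistic error; it's what an accuracy-informativity tradeoff recommends whenever the extra specificity costs little probability.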

In this paper, we argue that this is so. We make the case that much of our reasoning under uncertainty involves negotiating an accuracy-informativity tradeoff, and that this helps to explain a variety of patterns in the things people tend to guess, believe, and assert.  

We then bring this tradeoff to bear on the conjunction fallacy. We argue that it helps to explain—and partially rationalize—a variety of subtle empirical effects that have been found in people’s tendency to commit this fallacy.

Upshot: maybe we weren’t dumb for thinking (guessing) that Linda is a feminist bank teller, after all. 
