In this post I interview Richard Pettigrew (in the picture above), who is Professor in the Department of Philosophy at the University of Bristol, and is leading a four-year project entitled “Epistemic Utility Theory: Foundations and Applications”, also featuring Jason Konek, Ben Levinstein, Chris Burr and Pavel Janda. Ben Levinstein left the project in February to join the Future of Humanity Institute in Oxford, and Jason Konek left in August to take up a tenure-track post at Kansas State University. They have been replaced by Patricia Rich (PhD, Carnegie Mellon) and Greg Gandenberger (PhD, Pitt; postdoc at LMU Munich).
LB: When did you first become interested in the notion of epistemic utility? What inspired you to explore its foundations and applications as part of an ERC-funded project?
RP: It all started in my Masters year, when I read Jim Joyce's fantastic paper 'A Nonpragmatic Vindication of Probabilism' (Philosophy of Science, 1998, 65(4): 575-603). In that paper, Joyce wished to justify the principle known as Probabilism. This is a principle that is intended to govern your credences, also known as degrees of belief or partial beliefs. Probabilism says that your credences should obey the axioms of the probability calculus. Joyce notes that there already exist justifications for that principle, but they all appeal to the allegedly bad pragmatic consequences of violating it -- if your credences violate Probabilism, these arguments show, they'll lead you to make decisions that are guaranteed to be bad in some way.
The Dutch Book argument, as put forward by Frank Ramsey and Bruno de Finetti, is the classic argument of this sort. As his title suggests, Joyce seeks a justification that doesn't appeal to the pragmatic problems that arise from non-probabilistic credences. He's interested in identifying what is wrong with them from a purely epistemic point of view. After all, suppose I violate Probabilism because I believe that it's going to rain more strongly than I believe that it will rain or snow. It may well be true that these credences will have bad pragmatic consequences -- they may well lead me to buy a £1 bet that it will rain for more than I will sell a £1 bet that it will rain or snow, for instance, and that will lead to a sure loss for me.
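To see the sure loss concretely, here is a minimal sketch; the numbers are my own illustration, not drawn from Ramsey or de Finetti. With credence 0.8 in rain but only 0.5 in rain-or-snow, pricing a £1 bet on each proposition at those credences loses money however the weather turns out.

```python
# A toy Dutch Book: credence 0.8 in Rain but only 0.5 in Rain-or-Snow
# (a violation of Probabilism) prices a pair of £1 bets that lose money
# in every possible world. The numbers are illustrative, not from the
# original papers.
price_rain, price_rain_or_snow = 0.8, 0.5  # fair prices set by the credences

for rain, snow in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    rain_or_snow = 1 if (rain or snow) else 0
    # I buy the Rain bet at £0.80 and sell the Rain-or-Snow bet at £0.50.
    payoff = (rain - price_rain) + (price_rain_or_snow - rain_or_snow)
    print(f"rain={rain}, snow={snow}: net £{payoff:+.2f}")
# Every line prints a negative number: a guaranteed loss.
```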
But, for Joyce, what is epistemically wrong with these credences is something else: credences aim at accuracy, and a credence is more accurate the closer it lies to the truth value of its proposition -- a higher credence in a truth is more accurate, as is a lower credence in a falsehood. So if I believe that it will rain more strongly than you do, and it does rain, then I am more accurate than you, because I have a higher credence in a truth. Joyce then proves a startling result: if your credences violate Probabilism -- that is, if they do not satisfy the axioms of the probability calculus -- then there are alternative credences in the same propositions that are guaranteed to be more accurate than yours, however the world turns out. You know a priori that those alternative credences outperform yours from the point of view of accuracy. From this, Joyce infers that non-probabilistic credences must be irrational. This is his nonpragmatic vindication of Probabilism.
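Here is a minimal sketch of the dominance phenomenon, using the Brier score (squared distance from the truth values) to measure inaccuracy. The propositions, the particular credences, and the projection step are my own illustrative choices, not taken from Joyce's paper.

```python
# Joyce-style accuracy dominance under the Brier score: an incoherent
# pair of credences is beaten at every world by a coherent alternative.

# Two propositions: Rain, and Rain-or-Snow. A world assigns each a truth
# value; Rain being true forces Rain-or-Snow to be true.
worlds = [(1, 1), (0, 1), (0, 0)]  # (Rain, Rain-or-Snow)

def brier_inaccuracy(credences, world):
    """Sum of squared distances between credences and truth values (0/1)."""
    return sum((c - t) ** 2 for c, t in zip(credences, world))

# Incoherent: Rain believed more strongly than Rain-or-Snow.
incoherent = (0.8, 0.5)

# Euclidean projection onto the coherent set {(x, y): 0 <= x <= y <= 1};
# for this simple violation it just averages the two credences.
coherent = (0.65, 0.65)

for w in worlds:
    bad, good = brier_inaccuracy(incoherent, w), brier_inaccuracy(coherent, w)
    print(f"world {w}: incoherent {bad:.3f} vs coherent {good:.3f}")
# At every world the coherent credences are strictly more accurate.
```

The dominating alternative here is just the nearest coherent point in Euclidean distance; Joyce's result itself covers a broad class of accuracy measures, not only the Brier score.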
Another appealing feature of the argument, and the one that launched the project I'm currently exploring, is that it suggests a way in which we might argue for other credal principles. Joyce's argument essentially has two premises. The first is an account of epistemic utility: the epistemic utility of a credence is its accuracy. The second is a principle of decision theory: the dominance principle, which says that it's irrational to pick an option when there is an alternative option that is guaranteed to have greater utility -- that is, it is irrational to pick a dominated option. Using a mathematical theorem -- which shows that the undominated credence functions are precisely the probabilistic ones -- he derives Probabilism from these two premises. But the dominance principle is just one of many decision principles. Thus, a natural question emerges: which principles for credences follow from the other decision principles?
LB: One of the goals of the project is to provide justification for epistemic norms within the framework of epistemic utility theory. Which norms has your team looked at so far?
RP: That's right. As I said above, it's natural to see Joyce as presenting a particular instance of a general argument form that we might deploy to establish various principles for credences. And that's a large part of what we've been doing on the project. We've looked at Bayesian conditionalization, for instance. This says that, after learning some new evidence, an agent's credence that it will rain should be her prior credence that it will rain conditional on the evidence she has gained. That is, posterior credences should be given by prior credences conditional on the evidence obtained.
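For concreteness, here is a toy sketch of the rule; the worlds, the prior, and the evidence are invented for illustration.

```python
# Bayesian conditionalization over four toy worlds: the posterior is the
# prior renormalised within the evidence proposition.

prior = {"rain&wind": 0.2, "rain&calm": 0.3, "dry&wind": 0.1, "dry&calm": 0.4}
evidence = {"rain&wind", "dry&wind"}  # I learn: it is windy.

p_evidence = sum(p for w, p in prior.items() if w in evidence)

# Posterior = prior conditional on the evidence.
posterior = {w: (p / p_evidence if w in evidence else 0.0)
             for w, p in prior.items()}

rain = {"rain&wind", "rain&calm"}
print(sum(posterior[w] for w in rain))  # credence in rain rises from 0.5 to 2/3
```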
Hilary Greaves and David Wallace gave an epistemic utility argument for this principle in their 2006 paper 'Justifying Conditionalization' (Mind, 115(459): 607-632). They appealed to the decision principle that exhorts you to choose by maximising your expected utility, and they showed that Bayesian conditionalization is the updating rule that maximises expected accuracy. I then gave an argument for that updating rule that turns things around the other way (this is in my new book, Accuracy and the Laws of Credence, which is under contract with OUP at the moment). I showed that, if you plan to update by any other rule, then there are alternative prior credences -- alternative to your actual prior credences -- that every one of your possible future credences will expect to be more accurate than it expects your actual prior credences to be. That is, if you plan to update other than by conditionalization, you plan to adopt new credences that are guaranteed to judge your current credences to have been suboptimal.
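Continuing the toy example above, here is a small numerical check in the spirit of the Greaves and Wallace result; the rival updating rule is my own invention for contrast.

```python
# Planning to conditionalize has lower prior-expected Brier inaccuracy
# than a rival plan that never moves off the prior.
import numpy as np

prior = np.array([0.2, 0.3, 0.1, 0.4])  # rain&wind, rain&calm, dry&wind, dry&calm
rain = np.array([1, 1, 0, 0])           # truth value of Rain at each world
cells = {"windy": [0, 2], "calm": [1, 3]}  # the partition I might learn

def expected_inaccuracy(rule):
    """Prior-expected Brier inaccuracy of the credence in Rain that the
    rule adopts in each cell of the partition."""
    return sum(prior[i] * (rule[cell] - rain[i]) ** 2
               for cell, idxs in cells.items() for i in idxs)

# Conditionalization: posterior in Rain = prior conditional on the cell.
cond = {cell: prior[idxs] @ rain[idxs] / prior[idxs].sum()
        for cell, idxs in cells.items()}
stubborn = {"windy": 0.5, "calm": 0.5}  # rival: stick with the prior credence

print(expected_inaccuracy(cond))      # ~0.238
print(expected_inaccuracy(stubborn))  # 0.25 -- conditionalization wins
```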
Recently, I've also worked with Rachael Briggs on her new argument that an agent who fails to update by conditionalization is accuracy-dominated. That is, if you calculate the combined accuracy of an agent's current credences and her updated ones, then, if she plans to set her updated credences by some rule other than conditionalization, there are alternative current and updated credences that are guaranteed to be jointly more accurate than hers.
We've also looked at some of the so-called deference principles, such as the Principal Principle (which says how an agent ought to defer to the objective chances when setting her credences) and the Reflection Principle (which says how an agent ought to defer to her future credences when setting her current credences). The Principal Principle says, for instance, that your credence that it will rain conditional on the objective chance of rain being 70% should be 0.7. The Reflection Principle says, for instance, that your credence that it will rain conditional on your future credence in rain being 0.7 should be 0.7. In the case of the Principal Principle, I showed that, if you violate it, then there are alternative credences that are guaranteed to have higher objective expected accuracy; that is, however the objective chances in fact turn out to be, they will expect those alternative credences to be more accurate than they expect your Principal Principle-violating credences to be (‘Accuracy, Chance, and the Principal Principle’, 2012, Philosophical Review 121(2): 241-275; 'A New Epistemic Utility Argument for the Principal Principle', 2013, Episteme, 10(1): 19-35).
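To get a feel for the chance result, here is a toy computation of my own, not taken from the papers: under the Brier score, if the chance of rain is 0.7, then the chance function expects credence 0.7 in rain to be more accurate than any alternative.

```python
# Objective expected Brier inaccuracy of credence c when the chance of
# rain is 0.7: chance * (1-c)^2 + (1-chance) * c^2, minimised at c = chance.
import numpy as np

chance = 0.7
credences = np.linspace(0, 1, 101)
exp_inacc = chance * (1 - credences) ** 2 + (1 - chance) * credences ** 2
print(credences[exp_inacc.argmin()])  # -> 0.7
```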
We've also used this framework to argue for the Principle of Indifference, this time appealing to the Maximin principle, which tells you to choose the option whose worst-case outcome is best. Maximin is often identified as a risk-averse decision principle because it gives so much weight to the worst-case scenario. It turns out that the credences whose worst-case scenario is best are precisely those mandated by the Principle of Indifference: if you choose any other credences, their worst-case scenario will be worse than the worst-case scenario of the indifferent credences.
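Here is a small numerical sketch of that worst-case comparison; the three-world setup and the sampling of rivals are my own illustration.

```python
# Over three mutually exclusive worlds, the uniform credences (1/3, 1/3, 1/3)
# have the best worst-case Brier inaccuracy; random probabilistic rivals
# always do worse in their worst case.
import numpy as np

rng = np.random.default_rng(0)
worlds = np.eye(3)  # world i makes proposition i true, the others false

def worst_case(credences):
    return max(((credences - w) ** 2).sum() for w in worlds)

uniform = np.full(3, 1 / 3)
print(worst_case(uniform))  # 2/3

for _ in range(5):
    c = rng.random(3)
    c /= c.sum()  # a random probabilistic rival
    print(worst_case(c) > worst_case(uniform))  # True every time
```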
Project members have also been working on principles outside these core components of Bayesian epistemology. Jason Konek has been considering whether we might use epistemic utility considerations to investigate Sarah Moss' notion of probabilistic knowledge ('Probabilistic Knowledge and Cognitive Ability'). Moss claims that just as full beliefs can count as knowledge, so can partial beliefs or credences. Konek then asks what sort of credences might count as being known in this sense. He particularly investigates the suggestion that the epistemic success of known credences must be explained entirely by appeal to the agent's cognitive ability. And he gives an intriguing condition for this, which is interestingly in tension with the Principle of Indifference.
Konek has also investigated epistemic utility theory for imprecise credences ('Epistemic Conservativity and Imprecise Credence', forthcoming, Philosophy and Phenomenological Research). So far, I've been assuming that credences are sharp or precise -- that is, they are given by a single numerical value. But many epistemologists think that this is too demanding of an agent; or they think that such sharp credences fail to reflect the indeterminateness of the sort of evidence that we often acquire. My evidence concerning whether Labour will win the next general election is so complex and indeterminate that many philosophers feel that a sharp credence is an inappropriate response to it. They propose imprecise credences instead. Konek has considered what happens if we try to measure the accuracy of these imprecise credences.
Ben Levinstein has been looking at the principles that govern peer disagreement and asking whether the epistemic utility framework can shed some light on the debates here. He's taken a novel approach: instead of focussing only on what agents should do in particular cases of peer disagreement, he also considers what sorts of strategies they should deploy in order to optimise the long-run accuracy of their credences ('With All Due Respect', forthcoming, Philosophers' Imprint).
Levinstein has also used accuracy techniques to challenge the orthodox account of how questions of peer disagreement interact with questions about the permissiveness of rationality ('Permissive Rationality and Sensitivity'). Perhaps your evidence is sufficiently unspecific that there is more than one set of credences that it is rationally permissible for you to have in response to it. Suppose you have one of these sets of credences and I have another. When I learn this about us, is it rational for me to stick with the credences I have? Most people say that it is. After all, before I learned that you had those credences, I already knew that they were rational; how could it make any difference to learn that someone actually has them? But Levinstein shows that it can: if I learn that you have those credences, I should move my credences closer to them.
LB: Do the project's results have any applications outside of formal epistemology, for instance in empirical work on human reasoning?

RP: Although the project doesn't have a large empirical component, many of the results may still be used in explanations of actual reasoning that agents carry out. After all, though we know that we don't execute it perfectly in many situations, there is evidence that the brain performs many of its cognitive processes using Bayesian reasoning -- this is the contention of the Bayesian brain hypothesis that is currently receiving a great deal of attention in cognitive science. But this raises the question: why? What advantage does Bayesian reasoning provide? Why would we have evolved to update by Bayesian conditionalization, for instance? Accuracy-based arguments can help with that -- they show that Bayesian reasoning is optimal in a particular way.
However, the most substantial implications outside formal epistemology are for statistics and artificial intelligence. Since the project is mainly concerned with identifying which reasoning is optimal in particular situations, it provides a foundation for the sorts of reasoning that statisticians routinely use; and it can inform the design of the cognitive processes of artificial agents.
LB: In our own project (PERFECT) we are interested in the epistemic benefits of cognitions that are either false or irrational, from those that are considered as symptoms of mental disorders (e.g., delusions) to those that are strikingly common in the non-clinical population (e.g., self-enhancing beliefs). Can epistemic utility theory make sense of epistemically faulty cognitions having some epistemic benefits?
RP: I think it can. One topic that fascinates me is how we might use the techniques of epistemic utility theory to understand non-ideal agents. So far, we've been considering agents who can be in any credal state they please. But it might be more realistic to consider agents who can only be in certain credal states -- those that are computationally tractable, for instance. Can we then use accuracy measures to understand how they should reason?
One of the most appealing features of epistemic utility theory is just how readily it can be applied to new situations and philosophical questions. If you're interested in bounded rationality, you just specify the bounds and then ask what reasoning is optimal within those bounds. If you're interested in how to update in the light of some particular sort of evidence that doesn't teach you a proposition with certainty (so conditionalization does not apply), you just specify the constraints that the evidence places on your posterior credences and you then ask what posteriors optimise accuracy within those constraints. It's an extremely general framework that has the potential to answer a broad range of epistemological questions. And it has a very straightforward structure: you specify the epistemic situation; then you calculate what optimises accuracy in that situation.
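As a sketch of that recipe -- the prior, the constraint, and the choice of the Brier score are all my own illustration -- suppose your evidence constrains your posterior credence that it is windy to be 0.8, without teaching you any proposition with certainty. We can then search directly for the posterior that minimises expected inaccuracy subject to that constraint.

```python
# Specify the constraint the evidence places on the posterior, then find
# the posterior that minimises prior-expected Brier inaccuracy within it.
import numpy as np
from scipy.optimize import minimize

prior = np.array([0.2, 0.3, 0.1, 0.4])  # rain&wind, rain&calm, dry&wind, dry&calm
windy = np.array([1.0, 0.0, 1.0, 0.0])  # truth value of Windy at each world

def expected_brier(q):
    """Prior-expected Brier inaccuracy of posterior q over the four worlds."""
    truth = np.eye(4)
    return sum(prior[w] * ((q - truth[w]) ** 2).sum() for w in range(4))

constraints = [
    {"type": "eq", "fun": lambda q: q.sum() - 1},      # q is a probability
    {"type": "eq", "fun": lambda q: q @ windy - 0.8},  # evidence: q(Windy) = 0.8
]
result = minimize(expected_brier, prior, bounds=[(0, 1)] * 4,
                  constraints=constraints, method="SLSQP")
print(result.x.round(3))  # the accuracy-optimal constrained posterior
```

One thing this little experiment makes vivid is that the recommended posterior depends on the inaccuracy measure you plug in: with the Brier score the optimal posterior shifts the prior additively within each cell of the partition, while other measures deliver other updating rules.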
The Epistemic Utility project is also on Facebook.