
Epistemic Consequentialism: Interview with Kristoffer Ahlstrom-Vij


In this post we hear about a project on problems and prospects for epistemic consequentialism whose principal investigators are Kristoffer Ahlstrom-Vij (University of Kent) and Jeff Dunn (DePauw University). The project is funded by the Leverhulme Trust and running from August 2014 to July 2016. So far, one paper has been published as part of the project—‘A Defence of Epistemic Consequentialism’, Philosophical Quarterly 64 (257), 2014.

An edited volume entitled Epistemic Consequentialism is due to be published at the end of 2016 or early 2017 by Oxford University Press. It will feature papers by Clayton Littlejohn, Christopher Meacham, Michael Caie, Nancy Snow, Richard Pettigrew, Ralph Wedgwood, James Joyce, Hilary Kornblith, Julia Driver, Amanda MacAskill, Alejandro Perez Carballo, and Sophie Horowitz. The plan is to publish two other journal articles as part of the project.

Kristoffer has kindly agreed to answer a few questions.

LB: How did you first become interested in epistemic consequentialism?

KA-V: I became interested in epistemic consequentialism as a result of thinking about epistemic value. It’s a bit difficult to define ‘epistemic value’ without taking a stand on substantial and controversial questions, but one historically popular view is that things—including beliefs, character traits, and social arrangements—are epistemically valuable in so far as they enable people to form true beliefs.

In fact, I’ve argued that true belief is unique in being of intrinsic epistemic value, i.e., the type of epistemic value possessed independently of what else it might get you (see, e.g., my ‘In Defense of Veritistic Value Monism’, Pacific Philosophical Quarterly 94, 2013). So, for example, while having justification for your beliefs might be valuable because it makes it more likely that your beliefs will be true, and as such is of (mere) instrumental epistemic value, having a true belief is epistemically valuable in and of itself. But here’s the thing: being told that something is valuable, even intrinsically valuable, doesn’t in itself tell you anything about what you should believe.

To say anything about what we should believe, we need a normative theory that takes us from what’s good to believe to what we should believe. Epistemic consequentialism is a family of such normative views. For example, on a very simple version of such consequentialism, analogous to classical utilitarianism in ethics, we might say that we should believe in such a way as to maximise the good. As it happens, I don’t think that’s a very plausible version of epistemic consequentialism, but it illustrates nicely the general idea behind epistemically consequentialist views.


LB: What do you see as the main advantages and disadvantages of the view?

KA-V: One advantage was already mentioned above: it connects in a very straightforward way questions about what’s good to believe with questions about what we should believe. Another benefit is that it generates novel and interesting—and sometimes revisionary—suggestions for how we should go about our epistemic business.

Here’s an example: we tend to think that reflecting on our beliefs and their merits is invariably a good thing. However, we know from empirical psychology that our reflective capacities exhibit a variety of self-serving biases, many of which can be traced back to our very general and robust tendency towards overconfidence. As a result, reflection often fails to make us better off, and sometimes even makes us worse off than we otherwise would have been, as far as getting to the truth is concerned. On many versions of epistemic consequentialism, the lesson to draw from this is straightforward: we need to re-evaluate our rosy picture of reflection. In many cases, it might be that we should be reflecting far less on our beliefs than philosophers have thought.

This connects with what some have considered a disadvantage of the view: what I’ve described as an interestingly revisionary aspect of epistemic consequentialism—one that, for what it’s worth, certainly is consistent with the revisionary spirit of many consequentialist views in ethics—some would describe as a recipe for ‘bullet-biting’. To some extent, this comes down to one’s view on the role of intuitions in philosophical theorising. Some people feel that what seems intuitively plausible should weigh heavily in philosophical inquiry. Those people will be sceptical about the type of revisions that epistemically consequentialist views will sometimes invite.

LB: You recently organised a two-day conference on prospects and problems of epistemic consequentialism at the University of Kent in Canterbury. How have the experts' contributions informed your project? Are you planning other similar events as part of the project?

KA-V: We’ve organised two events as part of the project: a one-day workshop at LSE in November 2014, and then the two-day conference at Kent in June 2015 that you mention. One thing that has been particularly valuable about the events is the opportunities they have offered for interaction between epistemologists and ethicists. Both are interested in goods—epistemic and moral, respectively—and in how those goods relate to normative questions about what we ought to believe, how we ought to act, and so forth. Still, not until quite recently has there been serious interaction between the two fields, and it has been interesting to have the project play a small part in facilitating interactions that I think benefit epistemology and ethics alike.

LB: In our own projects (PERFECT and Costs and Benefits of Optimism) we are interested in the potential epistemic benefits of those cognitions that are epistemically problematic (e.g., obviously false or badly supported by evidence). We find the consequentialist framework useful in this respect. Do you see it as an advantage of consequentialism that it can explain how a belief that is false and unjustified can be conducive to the acquisition of true beliefs or knowledge? For instance, an excessively optimistic belief about one's self-worth could make a positive epistemic contribution if more confident agents are disposed to interact with other agents in a more epistemically productive way.

KA-V: This is a really interesting issue, and it gets to the heart of some of the most controversial features of epistemic consequentialist views. Earlier, I talked about the relationship between ‘good’ and ‘should’. Another way to talk about what we should believe is in terms of what it’s right to believe. So, say that if I form a belief that’s badly supported by the evidence, I will form tons of true beliefs. If true belief is an epistemic good, it would be (very) good for me to form the belief in question. But would it be right for me to do so? That depends on what type of consequentialist you are. You might take a view that’s a straightforward analogue of act utilitarianism, and say that it’s right to form a belief in so far as it has good (maybe maximally good) consequences. On that view, forming the relevant belief might not only be good but also right.

But say you’re a reliabilist, and as such take justification (i.e., the epistemic ‘right’) to be defined roughly as follows: a belief is justified if and only if it issues from a reliable belief-forming process. On that view, it might not be right to form the belief in question, since going against your evidence is, presumably, an unreliable way to form beliefs. So, if you’re a reliabilist, you might say that it would be good, but not right, to form the relevant belief—which I think nicely captures the feeling that there’s something good about forming the belief, but also something bad (or perhaps wrong) about so doing.

Of course, this raises really interesting issues in social epistemology. Think about it this way: reliabilism is a theory about what justification is; as such, it doesn’t come with an imperative to the effect that we should (say) maximise the number of justified beliefs. So it might be open to a reliabilist to say that, on a social level, it sometimes makes sense—as in: it would be good—to have people form unjustified beliefs. This might be for the reasons you highlight: maybe slightly overoptimistic, and as such unjustified (and false), beliefs about your own capabilities will make for a greater amount of social epistemic good than a situation in which everyone has completely well-founded views about their own potential. It seems to me that epistemic consequentialism provides an excellent framework for thinking about these types of issues in a systematic and constructive manner.
