
Trust Responsibly

This post is by Jakob Ohlhorst, a postdoctoral fellow on the Extreme Beliefs project at Vrije Universiteit Amsterdam. The post is about his recent book, Trust Responsibly (Routledge), which is available open access as an e-book.

Jakob Ohlhorst

"Strange coincidence, that every man whose skull has been opened had a brain!"

Trust Responsibly opens with this joke from Ludwig Wittgenstein. In On Certainty, Wittgenstein argued that some things we can only trust to be the case, because any evidence that speaks in favour of them already presupposes them. That everyone has a brain was a better example in the 1950s than it is now. This goes beyond trust in people: it also involves trust that the world is older than 100 years, trust that you are not in a coma and dreaming, and so on. In my book I argue that, to trust responsibly, we need virtues.

The problem with trust is that, if you do not need any evidence, you could trust just about anything to be the case. You might trust that astrology is a good way to learn about people, or that aliens are causing catastrophes with lasers from Mars. How do we tell good cases of trust from bad ones? Giving up on trust completely is not an option; we would end up in total scepticism and cognitive paralysis, unable to do anything cognitive: not doubt, not believe, not investigate. So we must at least be somewhat warranted to trust our fundamental presuppositions.




I argue that we are warranted to trust the presuppositions that enable us to exercise our epistemic virtues. I explain my view of epistemic virtues in more detail here on Imperfect Cognitions, but essentially, they are the psychological resources that enable us to discover and gain knowledge, to communicate it, and to solve problems. Our virtues would not work if we did not trust them to work. We are therefore warranted to trust our virtues.

You might think: but wait, how can we know which of our psychological resources we can actually trust? How do we recognise virtues? If we possess reflective virtues like conscientiousness, which allow us to evaluate our own thinking, then we can recognise which virtues are trustworthy. I argue that we are warranted to trust a virtue on two conditions. First, we must be aware of the operation of the psychological processes that support the virtue, though we do not need to know that they are virtuous. Second, if we had the reflective virtues that allow us to evaluate our own thinking, then we would recognise the virtue as a virtue. When these two conditions are satisfied, our trust in the virtue is responsible and warranted.

To illustrate this, consider a rabbit’s flight response. It is hyper-sensitive and will detect danger where there is none, so the flight response is not an epistemic virtue. If, through some miracle, the rabbit acquired reflective virtues and started thinking about the response, it would realise that it is unreliable and hence stop trusting it. Therefore, the rabbit is not warranted to trust the response. Still, the rabbit has other simple virtues that it is warranted to trust, say its ability to recognise food.
