
Deliberation, Interpretation, and Confabulation (1)


This is a report from the first day of the Deliberation, Interpretation and Confabulation Workshop at the Abraham Kuyper Centre for Science and Religion, VU University in Amsterdam, organised by Naomi Kloosterboer, and held on 19 and 20 June 2015. Note about the workshop poster above: circles are confabulation, squares are deliberation, and triangles are interpretation (how amazingly clever is that! Thanks to Naomi for pointing this out to me).

I (Lisa Bortolotti) was the first speaker. I talked about features of confabulatory explanations of our own attitudes and choices, and attempted to offer an account of what happens when we confabulate that makes sense of several results in experimental psychology (such as introspective effects, social intuitionism about moral judgements, and choice blindness). I argued that people are often ignorant of the factors causally responsible for the formation of their attitudes and the making of their choices; they produce an often ill-grounded claim about what caused their attitudes and choices; and in the process of giving such reasons they commit to other claims that can also be ill-grounded.

In line with the scope and interests of project PERFECT, I looked at the costs and benefits of confabulatory explanations. I argued that ignorance of causal factors is often faultless, and that ill-grounded causal claims can be both beneficial and inevitable, but that the ill-groundedness of the claims we commit to in the process of confabulating is an instance of irrationality that can and should be avoided. When the claims generated in this way are constrained by evidence, however, confabulation is the beginning of something good (and maybe all instances of deliberation start with a confabulatory explanation).


Naomi Kloosterboer (VU Amsterdam), pictured above, was the second speaker. She asked whether Moran's account of self-knowledge is too rationalistic, especially when applied to emotions. Moran argues that I acquire knowledge of my belief that p by making up my mind whether p, and he holds that this is true not only of belief but of all other mental attitudes. Naomi went on to examine Finkelstein's interpretation of the Transparency Claim, which is at the core of Moran's view of self-knowledge. The Transparency Claim is that, when asked "Do I believe that p?", I can answer the question by attending to considerations in favour of p itself.

Finkelstein believes that Moran is committed to a rationality assumption: "I'm entitled to assume that the attitude I in fact have is the one that, by my lights, the reasons call for me to have." But this does not seem to apply easily to common examples. One of Finkelstein's examples is David, who is fond of his dog Sadie. There is no specific answer to the question of whether David is rationally required to be fond of Sadie, and that question is harder to answer than the original question of whether David is fond of Sadie.

Naomi criticised Finkelstein's interpretation of Moran: the rationality assumption does not capture what is special about the Transparency Claim. If I want to know whether I am scared of the snake, I need to ask whether the snake is dangerous, but this is only part of the story. We can make judgements about anything, but we only have emotions about things that matter to us. Naomi believes that Moran's approach does not capture the fact that emotions are responses to things that concern us. She thought that it makes sense to endorse a rationality assumption, but she revised it as follows: "In general, mental attitudes are judgement-sensitive". This applies to some attitudes, but not all (not, for instance, to recalcitrant emotions).

Tillman Vierkant (Edinburgh) commented on Naomi's talk and, focusing on explicit racism and the deliberative stance, argued that attending to the reasons available to us is not the best way to determine what we think or feel.


Fleur Jongepier (Radboud University Nijmegen), pictured above, addressed first-person authority and the distinction between agential authority and authority as prediction of behaviour. Krista Lawlor (2003) argues that authoring one's attitudes is not sufficient for first-person authority, given the psychological evidence on dating couples. In Seligman (1980), dating couples were asked to give intrinsic or extrinsic reasons for being in their relationships. People asked to think about extrinsic reasons were much less likely to think that the relationship was successful and that it would last.

But such attitudes are not predictive of behaviour, and we hear them as "lacking authority". Lawlor works with a predictivist account of authority, on which people have authority only if their self-ascriptions have predictive power (that is, are likely to be consistent with their future doings and sayings). How do we disentangle this from the agential view? Maybe the agential view could be about how we gain self-knowledge, and the predictivist view could be about the criteria for being authoritative. But this does not seem to be the right way to keep the views apart. Other features could be mentioned in an attempt to characterise the difference, but they make each view of first-person authority unsatisfactory.

Luca Ferrero and Lisa Bortolotti have argued against Lawlor that the empirical evidence does not threaten the agential authority view. The evidence does not suggest that the person does not know what she believes, but that the person is not likely to behave in the future in a way that is consistent with the reported belief. And this is not a problem for agential authority. But Fleur expressed concern that correct self-ascriptions are not sufficient for first-person authority when they sound "empty and hollow" as in the dating couples study, and proposed an integrative framework where elements of both views are included.

Jeroen de Ridder (VU Amsterdam) offered a commentary on Fleur's paper and revisited her assessment of the agential and predictivist views of first-person authority.


Quassim Cassam (Warwick University), pictured above, talked about the deliberative route to self-knowledge. How do we know why we believe that p? Can we know this by deliberation? On one view, people gain self-knowledge via deliberation: "successful deliberation gives us knowledge of what we believe and how we believe it" (Matthew Boyle). Quassim was interested in the role of epistemic vices in providing reasons for 'crazy' beliefs (e.g. conspiracy theories that can be perfectly coherent but are wildly implausible). Such vices include conformity, negligence, prejudice, closed-mindedness, gullibility, and so on.

To explain a belief on the basis of its links to other beliefs is an epistemic explanation. To explain a belief on the basis of intellectual character traits is not an epistemic explanation. Intellectual vices are character traits that impede effective and responsible intellectual inquiry, where inquiry is the attempt to find things out, to extend or refine knowledge. Quassim suggested that epistemic vices are at least part of the explanation for 'crazy' beliefs, and argued against epistemic situationism (that is, the view that epistemic character plays no role in forming 'crazy' beliefs). The conspiracy theorist is not a confabulator, but rather a person with epistemic character deficits.

But if we believe that epistemic character traits are at least partially responsible for 'crazy' beliefs, then we need to account for real-life conspiracy theorists who exhibit epistemic vices in some contexts but are very effective and responsible reasoners in others. Are character traits 'local' and context-dependent? It would seem so (the same can be said for moral character traits, as one can be helpful on some occasions and not on others).

(report to be continued...)
