
Confabulation and Introspection

Today's post is by Adam Andreotta. He earned his PhD from the University of Western Australia in 2018. His research and teaching interests include epistemology, self-knowledge, the philosophy of David Hume, and the philosophy of artificial intelligence. Here, he introduces his article, "Confabulation does not undermine introspection for propositional attitudes", which has recently appeared in the journal Synthese. For more of his work, see his PhilPapers profile.

Most of us think there exists an asymmetry between the way we know our own minds and the way we know the minds of others. For example, it seems that I can know that I intend to watch Back to the Future, or that I believe that Australia will win the Ashes, by introspection: a private and secure way of knowing my own mental states. If I want to know whether my friend intends to see Back to the Future or believes that Australia will win the Ashes, I need to ask them or observe their behaviour.

This common-sense way of thinking about the mind is accepted by many philosophers, who have sought to explain the nature of this asymmetry. Other philosophers, however, deny that there is an asymmetry at all. They argue that the way we know our own beliefs, intentions, and desires (mental states philosophers call ‘propositional attitudes’) is no different in principle from the way that we know the beliefs, intentions, and desires of other people.

In “Confabulation does not undermine introspection for propositional attitudes”, I consider such scepticism. Specifically, I consider the work of Peter Carruthers, who thinks that the confabulation data—the data from choice blindness experiments, priming experiments, and experiments on split-brain patients—show that we cannot introspect our propositional attitudes.

Why think that the confabulation data warrant such a radical conclusion? Carruthers thinks that the data reveal two key findings. First, he claims that the data show that we make mistakes from time to time in our self-ascriptions—that is, we say that we have a certain intention, or belief, when we don’t. And second, he claims these mistakes are not random—that is, there are specific patterns found in the errors we make. Carruthers thinks that these patterns show that we self-attribute our propositional attitudes by self-interpretation, just like we do when we attribute mental states to other people.

While I agree that the confabulation data show that we sometimes make false self-attributions, I disagree that they support the view that we lack introspective access to our propositional attitudes. I make this case by challenging Carruthers's interpretation of the confabulation data: I show that the patterns of error he claims to find in the data do not exist.
