
Why it’s important to ask what forms introspection could take

In today's post, François Kammerer and Keith Frankish write about their recent special issue 'What Forms Could Introspective Systems Take?'. François is a philosopher of mind. He holds a PhD from the Sorbonne in Paris (France) and currently works as a postdoctoral researcher at the Ruhr-Universität Bochum (Germany). His work focuses on consciousness and introspection.

Keith is Honorary Professor in the Department of Philosophy at the University of Sheffield, UK, Visiting Research Fellow at The Open University, UK, and Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete, Greece. He works mainly in the area of philosophy of mind and is well known for his 'illusionist' theory of consciousness. 

François Kammerer

Human beings can introspect. They can look inwards, as it were, at their own minds and tell what thoughts, experiences, and feelings they have. That is, they can form representations of their own current mental states. And they can put these representations to use, flexibly modifying their behaviour in response to information about their own current mental states. For example, on a shopping trip to the supermarket I might suddenly notice that I am extremely hungry. And since I intend to follow a strict diet and know that I am weak-willed, I might decide to avoid the confectionery section of the store.

Human introspection has some unusual psychological and epistemological features, especially when contrasted with perception, and philosophers have devoted much time to speculating about it. How exactly does human introspection work? What sort of knowledge does it provide? However, there is a more general question that has been underexplored: What could introspection be? What are the possible ways in which cognitive systems — human or non-human, natural or artificial — could come to represent their own current mental states in a manner that allows them to use the information obtained for flexible behavioural control?

Keith Frankish


It is important to ask this question. If we don’t, we might assume that the human form of introspection is the only possible one, and that if introspection occurs in nonhuman animals, or ever develops in artificial intelligences, it will take the same basic form as our own, with some simplifications or variations. And this assumption might be wrong. For this reason, we have just edited a special issue of the Journal of Consciousness Studies devoted to exploring the neglected question of what introspection could be.

The issue opens with an article we coauthored, titled ‘What forms could introspective systems take? A research programme’, which serves as a target for the rest of the issue. In it, we argue that the question of what forms introspection could take is an important and fruitful one, and we give a precise, workable formulation of it. The central portion of the article then seeks to provide a preliminary map of the space of possible introspective systems. 

We focus on what we call ‘introspective devices’ — possible mechanisms for producing introspective representations. We propose that such devices can be classified along several dimensions, including (a) how direct their processing is, (b) how conceptualized their output is, and (c) how flexible their functioning is. We define an introspective system as a set of one or more introspective devices, and we propose that such systems can be ranked in terms of how unified their component devices are.

We then use these dimensions to describe a possibility space, in which one could locate the introspective devices that various theorists have ascribed to humans, as well as a huge range of possible introspective devices that other creatures might employ.

To further refine the space of possible forms of introspection, we also examine what we call ‘introspective repertoires’. An introspective repertoire is a way of grouping and characterizing the mental states that an introspective device targets. For example, human introspection arguably groups together states on the basis of what direction of fit they have, whether they are perceptual or cognitive, and whether or not they possess intentional content, and it characterizes (conceptualizes) each group as such. However, there is no reason to think that all introspective systems would employ the same groupings and characterizations as our own, and we propose a provisional way of mapping other possible introspective repertoires.

Finally, the article proposes a research programme on possible introspective systems. We identify two routes for the exploration of introspective possibilities, one focusing on cases, the other on theories. The former looks at specific cases of introspection, either real or imaginary. Adopting this route, we might examine how different groups of humans introspect, considering differences due to culture, neurodivergence, meditative practice, and so on. We might also look at how various non-human animals introspect (if they do) and ask whether and how current AI systems introspect. Finally, we might consider merely possible cases, imagining the forms introspection might take in beings such as aliens and future AIs, which have radically different forms of mentality from our own and different introspective needs.

The theory route, by contrast, involves looking at different theoretical models of introspection and of the mental states that introspection targets. By varying the parameters in these models, we should then be able to identify new introspective possibilities.

In both forms of exploration, the aim is to identify interesting possible forms of introspection — that is, ones that allow for efficient and flexible control of behaviour but are nevertheless different from the familiar human form. All this should give us a richer sense of the range of ways in which a mind could introspect.

The special issue also includes fifteen contributions by philosophers and cognitive scientists, each responding in some way to our proposal.

Some contributors make direct comments on, or criticisms of, our research programme (Peter Carruthers & Christopher Masciari, Maja Spener, Daniel Stoljar). Others (Krzysztof Dołęga, Adriana Renero, Wayne Wu) discuss particular models or theories of human introspection in the context of our programme, testing and evaluating the conceptual tools we offer.

Most contributors, however, focus on some particular aspect of our research question. One looks at introspective variation among humans (Stephen Fleming). Others focus on introspection in neurodivergent individuals (Alexandre Billon) and in meditators as conceived in the Buddhist tradition (Bryce Huebner & Sonam Kachru). 

At least three pieces look at introspection in nonhuman animals (Heather Browning & Walter Veit, Maisy Englund & Michael Beran, Jennifer Mather & Michaella Andrade). One piece is devoted to introspection in current AI systems, asking whether Large Language Models, such as ChatGPT, could introspect (Robert Long), and AI introspection is also touched upon in other pieces (Heather Browning & Walter Veit, Krzysztof Dołęga, Stephen Fleming).

Finally, two contributions take a radically speculative perspective. They discuss introspection in imaginary minds very different from ours. One focuses on technologically enhanced humans (Pete Mandik). Another analyzes ‘ancillary’ artificial minds, which are intermediate between singular unified minds and group minds (Eric Schwitzgebel & Sophie Nelson).

This exciting multidisciplinary symposium is followed by a lengthy response paper in which we address the contributors’ arguments and proposals and draw some lessons for our project.

We hope that this special issue succeeds in making the case for the value of research on possible ways in which cognitive systems can introspect and that other researchers will pursue this research — ideally in unexpected directions!
