In today's post, François Kammerer
and Keith Frankish write about their recent special issue 'What Forms Could Introspective Systems Take?'. François is a philosopher of mind. He holds a PhD from the Sorbonne in Paris (France) and currently works as a postdoctoral researcher at the Ruhr-Universität Bochum (Germany). His work focuses on consciousness and introspection.
Keith is Honorary Professor in the Department of Philosophy at the University of Sheffield, UK, Visiting Research Fellow at The Open University, UK, and Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete, Greece. He works mainly in the area of philosophy of mind and is well known for his 'illusionist' theory of consciousness.
François Kammerer
Human beings can introspect. They can look inwards, as it were, at their own minds and tell what thoughts, experiences, and feelings they have. That is, they can form representations of their own current mental states. And they can put these representations to use, flexibly modifying their behaviour in response to information about their own current mental state. For example, on a shopping trip to the supermarket I might suddenly notice that I am extremely hungry. And since I intend to follow a strict diet and know that I am weak-willed, I might decide to avoid the confectionery section of the store.
Human introspection has some unusual psychological and epistemological features, especially when contrasted with perception, and philosophers have devoted much time to speculating about it. How exactly does human introspection work? What sort of knowledge does it provide? However, there is a more general question that has been underexplored: What could introspection be? What are the possible ways in which cognitive systems — human or non-human, natural or artificial — could come to represent their own current mental states in a manner that allows them to use the information obtained for flexible behavioural control?
Keith Frankish
It is important to ask this question. If we don’t, we might assume that the human form of introspection is the only possible one, and that if introspection occurs in non-human animals, or ever develops in artificial intelligences, it will take the same basic form as our own, with some simplifications or variations. And this assumption might be wrong. For this reason, we have just edited a special issue of the Journal of Consciousness Studies devoted to exploring the neglected question of what introspection could be.
The issue opens with an article we coauthored, titled ‘What forms could introspective systems take? A research programme’, which serves as a target for the rest of the issue. In it, we argue that the question of what forms introspection could take is an important and fruitful one, and we give a precise, workable formulation of it. The central portion of the article then seeks to provide a preliminary map of the space of possible introspective systems.
We focus on what we call ‘introspective devices’ — possible mechanisms for producing introspective representations. We propose that such devices can be classified along several dimensions, including (a) how direct their processing is, (b) how conceptualized their output is, and (c) how flexible their functioning is. We define an introspective system as a set of one or more introspective devices, and we propose that such systems can be ranked in terms of how unified their component devices are.
We then use these dimensions to describe a possibility space, in which one could locate the introspective devices that various theorists have ascribed to humans, as well as a huge range of possible introspective devices that other creatures might employ.
To further refine the space of possible forms of introspection, we also examine what we call ‘introspective repertoires’. An introspective repertoire is a way of grouping and characterizing the mental states that an introspective device targets. For example, human introspection arguably groups together states on the basis of what direction of fit they have, whether they are perceptual or cognitive, and whether or not they possess intentional content, and it characterizes (conceptualizes) each group as such. However, there is no reason to think that all introspective systems would employ the same groupings and characterizations as our own, and we propose a provisional way of mapping other possible introspective repertoires.
Finally, the article proposes a research programme on possible introspective systems. We identify two routes for the exploration of introspective possibilities, one focusing on cases, the other on theories. The former looks at specific cases of introspection, either real or imaginary. Adopting this route, we might examine how different groups of humans introspect, considering differences due to culture, neurodivergence, meditative practice, and so on. We might also look at how various non-human animals introspect (if they do) and ask whether and how current AI systems introspect. Finally, we might consider merely possible cases, imagining the forms introspection might take in beings such as aliens and future AIs, which have radically different forms of mentality from our own and different introspective needs.
The theory route, by contrast, involves looking at different theoretical models of introspection and of the mental states that introspection targets. By varying the parameters in these models, we should then be able to identify new introspective possibilities.
In both forms of exploration, the aim is to identify interesting possible forms of introspection — that is, ones that allow for efficient and flexible control of behaviour but nevertheless differ from the familiar human form. Together, the two routes should give us insight into the range of ways in which a mind could introspect.
The special issue also includes fifteen contributions by philosophers and cognitive scientists, each responding in some way to our proposal.
Some contributors make direct comments on, or criticisms of, our research programme (Peter Carruthers & Christopher Masciari, Maja Spener, Daniel Stoljar). Others (Krzysztof Dołęga, Adriana Renero, Wayne Wu) discuss particular models or theories of human introspection in the context of our programme, testing and evaluating the conceptual tools we offer.
Most contributors, however, focus on some particular aspect of our research question. One looks at introspective variation among humans (Stephen Fleming). Others focus on introspection in neurodivergent individuals (Alexandre Billon) and in meditators as conceived in the Buddhist tradition (Bryce Huebner & Sonam Kachru).
At least three pieces look at introspection in non-human animals (Heather Browning & Walter Veit, Maisy Englund & Michael Beran, Jennifer Mather & Michaella Andrade). One piece is devoted to introspection in current AI systems, asking whether large language models, such as ChatGPT, could introspect (Robert Long); AI introspection is also touched on in other pieces (Heather Browning & Walter Veit, Krzysztof Dołęga, Stephen Fleming).
Finally, two contributions take a radically speculative perspective, discussing introspection in imaginary minds very different from ours. One focuses on technologically enhanced humans (Pete Mandik); the other analyses ‘ancillary’ artificial minds, which are intermediate between singular unified minds and group minds (Eric Schwitzgebel & Sophie Nelson).
This exciting multidisciplinary symposium is followed by a lengthy response paper in which we address the contributors’ arguments and proposals and draw some lessons for our project.
We hope that this special issue succeeds in making the case for the value of research on the possible ways in which cognitive systems can introspect, and that other researchers will take this project forward — ideally in unexpected directions!