
Growing Autonomy (2)

This cross-disciplinary symposium on the nature and implications of human and artificial autonomy was organised by Anastasia Christakou and held at the Henley Business School at the University of Reading on 8th May 2019. You can find a report on the first part of the workshop here.



The first talk in the second half of the workshop was by Daniel Dennett (Tufts) and Keith Frankish (Sheffield), exploring how we can build up to consciousness and autonomy. They endorsed an "engineering approach" to solving hard philosophical problems, such as the problem of consciousness, and asked: How can we get a drone to do interesting things? For instance, recognise things? We can start by supposing that it has sensors for recognising and responding to stimuli.




There will also be a hierarchy of feature detectors and a suite of controllers that take multiple inputs and vary their outputs depending on their combination and strength. When it comes to action selection and conflict resolution, there will be outputs competing to control the effectors. This process will need tuning up, but how can it be adapted to different training environments? The drone is sent out and its performance is reviewed.
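To make the picture concrete, here is a minimal, purely illustrative sketch (my own, not Dennett and Frankish's): feature detectors map sensor readings to features, controllers combine several features into bids for actions, and action selection lets the strongest bid win control of the effectors. All class names, detectors, and weights are invented for the example.

```python
# Illustrative sketch only: a toy version of the detector/controller
# architecture described above. Names and numbers are invented.

class FeatureDetector:
    """Maps raw sensor readings to a feature strength in [0, 1]."""
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate  # a function of the sensor readings

    def detect(self, sensors):
        return self.predicate(sensors)


class Controller:
    """Combines several detector outputs into a bid for one action."""
    def __init__(self, action, weights):
        self.action = action
        self.weights = weights  # detector name -> weight

    def bid(self, features):
        return sum(self.weights.get(name, 0.0) * value
                   for name, value in features.items())


def select_action(sensors, detectors, controllers):
    """Action selection: controllers compete, the strongest bid wins the effectors."""
    features = {d.name: d.detect(sensors) for d in detectors}
    best = max(controllers, key=lambda c: c.bid(features))
    return best.action


# Example: a drone that avoids obstacles unless it spots a charging station.
detectors = [
    FeatureDetector("obstacle_near", lambda s: 1.0 if s["distance"] < 2.0 else 0.0),
    FeatureDetector("charger_visible", lambda s: s["charger_signal"]),
]
controllers = [
    Controller("turn_away", {"obstacle_near": 1.0}),
    Controller("approach", {"charger_visible": 1.5, "obstacle_near": -0.5}),
]
print(select_action({"distance": 1.5, "charger_signal": 0.9}, detectors, controllers))
```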




The drone will have a manifest image of the world because it can recognise a variety of things, and it will also have reasons for the things it does. Some of these reasons will have been designed by the engineers, and others will have evolved. But the architecture will be fixed (e.g., such drones won't be able to acquire new recognition skills), so we need to design and build drones that can develop new controllers and detectors.

The Recogniser Drone becomes a Regulator Drone. The key is to add detector and controller generators and to construct a response monitor. And the process continues: the Regulator becomes the Reporter and the complexity increases. The Reporter becomes the Ruminator and interprets its self-generated signals: by representing its manifest image, it becomes conscious of its manifest image, making those representations salient, triggering associations, and carrying information about its own reactions.
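Again a purely illustrative sketch, reusing the toy FeatureDetector class from the sketch above: one way to picture the step from a fixed Recogniser to a system that monitors its own responses and mints new detectors from those self-generated signals. The ResponseMonitor, generate_detector, and the threshold are all invented for the example.

```python
# Illustrative only: a response monitor records the system's own responses,
# and a detector generator can create new detectors from them.

class ResponseMonitor:
    """Keeps a record of the system's own responses, so those responses
    can themselves become inputs to later processing."""
    def __init__(self):
        self.history = []

    def record(self, sensors, action):
        self.history.append((sensors, action))

    def recent_actions(self, n=10):
        return [action for _, action in self.history[-n:]]


def generate_detector(monitor, n=10, threshold=7):
    """Toy detector generator: if one response dominates recent history,
    create a new detector that reports this fact about the system itself."""
    recent = monitor.recent_actions(n)
    if recent and recent.count(recent[-1]) > threshold:
        repeated = recent[-1]
        # The new detector responds to a self-generated signal, not to the world.
        return FeatureDetector(f"stuck_on_{repeated}", lambda _sensors: 1.0)
    return None
```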


The next speaker was... me (Lisa Bortolotti, University of Birmingham). I brought a bit of project PERFECT to the workshop, arguing that some beliefs that are epistemically irrational may nonetheless support agency. I used the literature on broad confabulation and unrealistic optimism as the principal case studies.




If we want to understand human agency and build artificial agency, we need to recognise not only the strengths and distinctiveness of human agency but also its limitations. We might assume that epistemically irrational beliefs compromise agency. However, some irrational beliefs, namely those that contribute to a view of ourselves as competent, coherent, and efficacious agents, are instrumental to our pursuing our goals in the face of setbacks and also enhance our chances of attaining those goals.

Optimistic beliefs and confabulated explanations play this role in some contexts. The implication is that when we aim at reducing or eliminating irrationality, we need to (1) be able to distinguish irrational beliefs that are helpful from those that are unhelpful (as we are trying to do at project PERFECT with the notion of epistemic innocence), and (2) think about what will play the positive role of supporting our agency when we replace the helpful irrational beliefs with less irrational ones.

The last talk was by Murray Shanahan (Imperial College) and addressed the relationship between intelligence and consciousness in artificial intelligence. We tend to think of AI systems as having specialist intelligence, since they are built for specific applications, but the aim is to build AI systems that are generally intelligent.




Three obstacles to general intelligence in AI systems are:

  • Common Sense - how do we give AI systems an understanding of the everyday world, of people and things?
  • Abstract Concepts - how do we give them the ability to think abstractly?
  • Creativity - how do we get them to be creative?

To overcome these obstacles, we need deep reinforcement learning, virtual environments, and curricula and lifelong learning as applied to machines. Cognitive integration is the ideal to pursue: this is when the full resources of the brain are brought to bear on a situation. Another thing that is needed is imagination, which makes mental time travel, creativity, and language possible. Further, reflexive cognition is required to enable introspection and awareness of one's own beliefs and feelings.
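As a rough picture of how these ingredients fit together, here is a minimal, purely illustrative sketch (not Shanahan's proposal): a toy agent trained by tabular reinforcement learning, standing in for deep RL, over a curriculum of progressively harder virtual environments, with the learned values carried over between tasks in the spirit of lifelong learning. The environment, agent, and hyperparameters are all invented for the example.

```python
import random

class CorridorEnv:
    """Toy virtual environment: reach the far end of a corridor of given length."""
    def __init__(self, length):
        self.length = length

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):              # action is +1 (forward) or -1 (back)
        self.pos = max(0, self.pos + action)
        done = self.pos >= self.length
        reward = 1.0 if done else -0.01  # small cost per step, reward at the goal
        return self.pos, reward, done


def train(q, env, episodes=200, eps=0.1, alpha=0.5, gamma=0.9):
    """Tabular Q-learning: a minimal stand-in for the deep RL mentioned above."""
    actions = [1, -1]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:                        # explore
                action = random.choice(actions)
            else:                                            # exploit
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = env.step(action)
            best_next = max(q.get((nxt, a), 0.0) for a in actions)
            q[(state, action)] = (1 - alpha) * q.get((state, action), 0.0) \
                + alpha * (reward + gamma * best_next)
            state = nxt
    return q


# Curriculum / lifelong learning: the same value table carries over as the
# corridors get longer, so earlier experience shapes later learning.
q = {}
for length in [2, 4, 8]:
    q = train(q, CorridorEnv(length))
```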

What we may not need for general intelligence are awareness of self and the capacity for suffering. Shanahan argued that if we can build generally intelligent AI systems without self-awareness and the capacity for suffering, we should do so, to avoid ethical complications. But can we build generally intelligent AI without selfhood? Shanahan explored several options for avoiding the creation of AI systems with selfhood, including creating them without a body.


The workshop ended with a panel discussion with the audience. This was a very informative and genuinely cross-disciplinary event encompassing many topical issues in the philosophy of mind and artificial intelligence.

