Thursday 23 May 2019

Growing Autonomy (2)

This cross-disciplinary symposium on the nature and implications of human and artificial autonomy was organised by Anastasia Christakou and held at the Henley Business School at the University of Reading on 8th May 2019. You can find a report on the first part of the workshop here.



The first talk in the second half of the workshop was by Daniel Dennett (Tufts) and Keith Frankish (Sheffield), exploring how we can build up to consciousness and autonomy. They endorsed an "engineering approach" to hard philosophical problems, such as the problem of consciousness, and asked: how can we get a drone to do interesting things, for instance recognise things? We can start by supposing that it has sensors for recognising and responding to stimuli.
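To make the idea concrete, here is a minimal sketch of such a recogniser (my illustration, not the speakers'): a single scalar sensor reading, a threshold detector, and a response wired directly to it. The names and threshold values are invented for the example.

```python
# A minimal sketch of a "recogniser": a threshold detector turns a raw
# sensor reading into a recognition, which directly triggers a response.
# All names and values here are illustrative assumptions.

def brightness_detector(reading: float) -> bool:
    """Fires when the sensed brightness crosses a fixed threshold."""
    return reading > 0.7

def respond(reading: float) -> str:
    # A recognised stimulus triggers one response; anything else another.
    return "approach" if brightness_detector(reading) else "keep searching"

print(respond(0.9))  # approach
print(respond(0.3))  # keep searching
```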




There will also be a hierarchy of feature detectors and a suite of controllers that take multiple inputs and vary their outputs depending on their combination and strength. When it comes to action selection and conflict resolution, outputs will compete to control the effectors. This process will need tuning up, but how can it be adapted to different training environments? The drone is sent out and its performance is reviewed.
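One way to picture this stage, a sketch under my own assumptions rather than code from the talk: feature detectors score the sensor state, controllers combine several detector outputs with different weights, and the controller with the strongest activation wins control of the effectors, a simple winner-take-all form of conflict resolution.

```python
# A hedged sketch of the detector/controller architecture: detectors
# score the input, controllers combine detector outputs, and the
# controller with the strongest activation wins (winner-take-all).

from typing import Callable, Dict

detectors: Dict[str, Callable[[dict], float]] = {
    "obstacle": lambda s: s.get("proximity", 0.0),
    "target":   lambda s: s.get("target_signal", 0.0),
}

# Each controller weighs multiple detector outputs differently.
controllers = {
    "avoid":  lambda f: 2.0 * f["obstacle"] - 0.5 * f["target"],
    "pursue": lambda f: 1.5 * f["target"] - 0.3 * f["obstacle"],
}

def select_action(sensor_state: dict) -> str:
    features = {name: d(sensor_state) for name, d in detectors.items()}
    # Outputs compete; the strongest activation controls the effectors.
    return max(controllers, key=lambda c: controllers[c](features))

print(select_action({"proximity": 0.9, "target_signal": 0.2}))  # avoid
print(select_action({"proximity": 0.1, "target_signal": 0.8}))  # pursue
```

Tuning the weights, and reviewing the drone's performance in different training environments, would then amount to adjusting the numbers in the controllers above.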




The drone will have a manifest image of the world because it can recognise a variety of things, and it will also have reasons for the things it does. Some of these reasons will have been designed by the engineers and others will have evolved. But the architecture will be fixed (e.g., the drone won't be able to acquire new recognition skills). So we need to design and build drones that develop new controllers and detectors.

The Recogniser Drone becomes a Regulator Drone. The key is to add detector and controller generators and to construct a response monitor. The process continues: the Regulator becomes the Reporter and the complexity increases. The Reporter becomes the Ruminator and interprets its self-generated signals: by representing its manifest image, it becomes conscious of it, making those representations salient, triggering associations, and carrying information about its own reactions.
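A sketch of what adding generators and a monitor might look like (the mechanism and names are my assumptions, not details from the talk): a generator builds new threshold detectors at runtime, and a response monitor logs outcomes so that the drone's performance can be reviewed and its detectors revised.

```python
# A sketch of the Regulator step: a generator spawns new detectors, and a
# response monitor records outcomes for later review. Names and mechanism
# are illustrative assumptions.

from typing import Callable, Dict, List, Tuple

def make_threshold_detector(key: str, threshold: float) -> Callable[[dict], bool]:
    """Detector generator: builds a new recognition skill on demand."""
    return lambda state: state.get(key, 0.0) > threshold

class ResponseMonitor:
    """Logs (stimulus, action, outcome) so performance can be reviewed."""
    def __init__(self):
        self.log: List[Tuple[dict, str, float]] = []
    def record(self, state: dict, action: str, reward: float):
        self.log.append((state, action, reward))
    def average_reward(self) -> float:
        return sum(r for _, _, r in self.log) / max(len(self.log), 1)

detectors: Dict[str, Callable[[dict], bool]] = {}
monitor = ResponseMonitor()

# The drone acquires a new recognition skill at runtime.
detectors["heat_source"] = make_threshold_detector("temperature", 30.0)
state = {"temperature": 35.0}
action = "investigate" if detectors["heat_source"](state) else "ignore"
monitor.record(state, action, reward=1.0)
print(action, monitor.average_reward())  # investigate 1.0
```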


The next speaker was... me (Lisa Bortolotti, University of Birmingham). I brought a bit of project PERFECT to the workshop, arguing that some beliefs that are epistemically irrational may nonetheless support agency. I used the literature on broad confabulation and unrealistic optimism as the principal case studies.




If we want to understand human agency and build artificial agency, then we need to recognise not only the strengths and distinctiveness of human agency but also its limitations. We might assume that epistemically irrational beliefs compromise agency. However, some irrational beliefs, those that contribute to a view of ourselves as competent, coherent, and efficacious agents, are instrumental to our pursuing goals in the face of setbacks and enhance our chances of attaining them.

Optimistic beliefs and confabulated explanations play this role in some contexts. The implication is that, when we aim at reducing or eliminating irrationality, we need to (1) be able to distinguish irrational beliefs that are helpful from those that are unhelpful (as we are trying to do at project PERFECT with the notion of epistemic innocence) and (2) think about what will play the positive role of supporting our agency when we replace the irrational beliefs that are helpful with less irrational ones.

The last talk was by Murray Shanahan (Imperial College) and addressed the relationship between intelligence and consciousness in artificial intelligence. We tend to think of AI systems as having specialist intelligence because they are built for specific applications, but the aim is to build AI systems that are generally intelligent.




Shanahan identified three obstacles to general intelligence in AI systems:

  • Common Sense - how do we give AI systems an understanding of the everyday world, of people and things?
  • Abstract Concepts - how do we give them the ability to think abstractly?
  • Creativity - how do we get them to be creative?

To overcome these obstacles, we need deep reinforcement learning, virtual environments, curricula, and lifelong learning as applied to machines. Cognitive integration is the ideal to pursue: this is when the full resources of the brain are brought to bear on a situation. Imagination is also needed, as it makes mental time travel, creativity, and language possible. Further, reflexive cognition is required for introspection and awareness of one's own beliefs and feelings.
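As an illustration of the first ingredient, here is the core reinforcement-learning loop in its simplest tabular form; deep RL replaces the value table with a neural network. The toy corridor environment and all parameter values are my own illustrative assumptions.

```python
# The core reinforcement-learning loop, in tabular Q-learning form.
# Deep RL swaps the table for a neural network; the toy environment
# here is purely illustrative.

import random
from collections import defaultdict

N_STATES, ACTIONS = 5, [0, 1]   # toy corridor: move left (0) or right (1)
q = defaultdict(float)          # Q(s, a) value table
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Right moves toward the goal at state 4, which yields reward 1."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, 1.0 if nxt == N_STATES - 1 else 0.0

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: q[(s, a)])
        s2, r = step(s, a)
        # Temporal-difference update toward the bootstrapped target.
        target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

print([max(ACTIONS, key=lambda a: q[(st, a)]) for st in range(N_STATES)])
# The learned policy should mostly prefer action 1 (move right).
```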

What we may not need for general intelligence are awareness of self and the capacity for suffering. Shanahan argued that if we can build generally intelligent AI systems without self-awareness and the capacity for suffering, we should do so, to avoid ethical complications. But can we build generally intelligent AI without selfhood? Shanahan explored several options for avoiding the creation of AI systems with selfhood, including building them without a body.


The workshop ended with a panel discussion with the audience. This was a very informative and genuinely cross-disciplinary event encompassing many topical issues in the philosophy of mind and artificial intelligence.

