Tuesday 28 March 2017

Irrational Emotions, Rational Decisions, and Artificial Intelligence



Thomas Ames is a graduate student in philosophy at the University of Missouri-St. Louis, with interests in epistemology, agency, and disorders of selfhood. In this post, he summarizes some of his current research into the role irrational emotions may play in making rational decisions, and what that may mean for the future of artificial intelligence.

Quite a bit has been written on the role of emotion in the decision-making process. Drawing on cases of traumatic brain injury that have led to deficits in both emotion and rational decision-making, several neurologically framed theories have been proposed to explain why that may be. One prominent theory, the somatic marker hypothesis (1, 2), introduced by Antonio Damasio (University of Southern California), posits that emotions play an integral neurological role in decision-making: in patients with specific brain lesions that affect their emotions, the ability to make decisions was found to be adversely affected as well. It follows, then, that the two capacities are connected: there must be some relationship between feeling emotions, making decisions, and acting upon them.

While much research has been done on what might seem like rational emotions and rational decisions, perhaps less has been said about the role of irrational emotions in rational decisions. We may be biased to intuit that our emotions are largely rational; however, that may not always be the case. Consider our life experiences and the role emotions play in our decisions: I may wake up in the morning and decide to put on a pair of Star Trek Spock socks because I’m a fan of Star Trek and the Spock socks feature articulated, wiggle-able ears that I find hilarious. In this sense I believe they’re humorous, I believe others may find them humorous, or at the very least I believe that others will infer my interest in Star Trek by virtue of the themed socks. In this case most of us would likely say these are rational beliefs and that I’ve made a rational decision based on rational emotion. To an extent, Spock would be proud.

However, let’s take another case from daily life: I might put on my clothes from left to right because it feels odd not to do it in that order. Left foot, right foot; left arm, right arm; and so on. Variation isn’t permitted; otherwise, emotionally, something feels wrong. What’s going on here? An irrational emotion seems to play some part in controlling my decision-making: in order not to feel funny (there’s truly no other penalty for not adhering to this preference), I make a rational decision. It seems we can act upon decisions made rationally or even irrationally, and, further, those decisions can be based on rational or irrational emotions.

So what does this have to do with anything? A particularly interesting facet of this observation is the impact it might have on the future of artificial intelligence. Most might say that future AI is likely to always act rationally, or logically, or at least within the parameters of how its program allows it to learn, decide, and act. However, especially with an eye toward eventually passing the Turing Test, this suggests that future AI must be able to act both rationally and irrationally, or at least display some sort of irrationality that leads to rational decisions, in order to mimic humans realistically. Even if we do not ascribe emotion to AI, it seems plausible that this mimicry is a necessary component of making it as human-like as possible.
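To make this a little more concrete, here is a minimal toy sketch in Python of what such mimicry might look like: an agent that maximizes utility almost all of the time but occasionally defers to an arbitrary "felt" preference, much like the left-to-right dressing habit above. All names here (choose, felt_preference, sliver) are hypothetical, and this is only an illustration of the idea, not a proposal for how such an AI would actually be built.

```python
import random

def choose(options, utility, felt_preference=None, sliver=0.05):
    """Pick the highest-utility option, except that a small fraction of
    the time an arbitrary 'felt' preference wins out instead."""
    if felt_preference in options and random.random() < sliver:
        # The 'irrational' branch: nothing in the payoffs justifies this
        # choice; it is taken only because the alternative feels wrong.
        return felt_preference
    return max(options, key=utility)

# Example: the dressing-order habit. Both orders score the same, but the
# agent nearly always reports 'left-to-right' via its felt preference.
orders = ["left-to-right", "right-to-left"]
print(choose(orders, utility=lambda o: 0,
             felt_preference="left-to-right", sliver=0.9))
```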

This, of course, leads to further important considerations. If we create AI that acts rationally, and it rationally wants to protect its own existence, might it necessarily act against humanity? Similarly, if we create AI that acts irrationally, can we ever trust that the AI’s irrationality won’t result in it acting against humanity? Our consideration of the role of emotion may have a greater effect on the future of AI than we initially thought: if we do not impart emotion, AI might very well act against humanity. Then again, if that emotion is sometimes irrational, just as humans’ emotions sometimes are, then maybe it isn’t the saving grace we require after all.

3 comments:

  1. Hi, Thomas (if I may).

    Interesting matter; I think we should not deliberately make AI that would ever be irrational, but that aside, I'd like to raise a couple of issues:

    1. Would Spock pass the Turing Test, when behaving as usual? (or, let's stipulate, being fully rational, just in case).

    If the answer is affirmative, then it seems to me that displaying irrationality is not needed.
    If the answer is negative, then I'd suggest maybe passing the Turing Test isn't so important after all. Wouldn't an AI that passes a Spock/Turing Test do?

    2. Does convincingly pretending to be human require behaving irrationally if humans do?
    For example, an actor can play a role and pretend to behave in an irrational manner. But she's not behaving irrationally. She's pretending to, in order to achieve her goal of playing her role convincingly - and that seems rational.

  2. Hello, Angra!

    Interesting questions! Let me see if we can't develop this a bit further!

    1. Spock is a funny character. As a Trekker, Spock has always been a bit "odd" to me: he continually makes reference to not feeling emotion, and yet elsewhere he makes references to emotional aspects. Clearly there's some delineation between, perhaps, strong emotions and implicit emotion. But more telling is that Spock, in his supposed non-emotional state, doesn't act like humans do when they're afflicted with the same; that is, when humans are unable to feel emotion, it seems very difficult for them to make decisions. Spock, perhaps because his emotionlessness comes from his culture (remember, Vulcans and Romulans share ancestry), retains some ability to make decisions despite his [supposed] pure rationality.

    2. I should say that's absolutely the case. When someone seems to act perfectly rationally, there seems to be something a bit "off" about that person, or their statements are clearly atypical of what we might see from most of society. We expect some level of irrationality or, perhaps more appropriately, some indication that emotion is affecting an otherwise rational decision. In the case of AI, I think a human would more than likely conclude either that this is a very oddly behaving human or that it is indeed a computer with no sense of shame in its far-too-rational decision-making processes. To inject a bit of irrationality, even a sliver, gives it its humanity, if you will. It needn't reach levels of chaos, randomness, or anarchy, but we should see some sliver of irrationality here and there, which is what makes us unique. I think an AI will need to do the same in order to pass the Turing Test convincingly.

    Great thoughts and questions! Thank you so much for writing!

  3. Thank you for your reply, Thomas!

    With regard to 2., what I was trying to get at is that actors can pretend to behave irrationally without behaving irrationally. Why can't an AI pretend to act human when it talks to humans (including some apparent irrationality, if needed) without actually behaving irrationally?

    In Re: Spock, what I'm trying to say is: could an AI pass a sort of Vulcan Turing Test without any irrational or pretend-irrational behavior?
    If it can, isn't a Vulcan Turing Test good enough, at least for most purposes one may want an AI for? Is it too big a problem if they don't pass for human but still do their jobs? (At least for all of the jobs that don't require passing for human, which includes most; Vulcans could do most human jobs.)

