Thursday, 31 August 2017

Cognitive Phenomenology: An interview with Peter Carruthers

In this post Federico Bongiorno (PhD student at the University of Birmingham) interviews Peter Carruthers, Professor of Philosophy at the University of Maryland, College Park, MD. Peter’s research has focused predominantly on philosophy of mind, philosophy of psychology, and cognitive science. Here, Federico and Peter (pictured below) discuss Peter's position on the debate over cognitive phenomenology.

FB: Your recent work has focussed, among other things, on the question of cognitive phenomenology. Roughly, the question amounts to asking whether cognition has its own phenomenal character. Can you tell us more about this issue and its significance?

PC: The first thing that I ought to mention is that this is joint work done with Bénédicte Veillet. The issue is essentially whether thought has a phenomenal character that is not reducible to other kinds of phenomenal character. Thought is often associated with phenomenal states. As you listen to me speaking now, you are extracting meaning. At the same time, you have the phonology of the sentences that I am using, and you might also be forming visual images or affective associations of various kinds. 

So there is going to be a whole wealth of phenomenal character that goes along with the meaning of any particular sentence that I utter. The question of cognitive phenomenology can be stated as follows: is there some distinctive phenomenology that belongs to the concepts and propositions themselves, that doesn’t just reduce to all the surrounding stuff? 

For instance, when you think a thought, there is a phenomenology of inner speech. But is there also a phenomenology that is distinctive to the thought that you are thinking in inner speech? If you could have the pure thought, would that have a phenomenology in its own right, independent of its causes and effects on the rest of your mental life?

I became interested in these sorts of questions back when I was working on qualia and phenomenal consciousness. It seemed to me that what gave rise to the hard problem of consciousness was distinctively to do with those kinds of mental states that you can form recognitional concepts for – as happens, for instance, when you experience red and form a concept for the way the experience of red is for you. These mental states do in fact give rise to thought-experiments of the ‘hard problem’ sort. 

You can have, for instance, zombie thought experiments, and speculate that zombies might be able to employ direct recognitional concepts of their brain states, even if those states have no associated phenomenal quality. But you can also have inverted-spectrum thought experiments, where an experience for which we form the direct recognitional concept red is instead caused by perceiving green. 

What occurred to me is that we don’t have analogous recognitional concepts for thoughts – the idea that you can, for instance, recognise the occurrence of the concept seven being tokened in yourself struck me as implausible. These considerations motivated me to argue that there is no cognitive phenomenology, as thoughts and conceptual states do not give rise to the sort of hard-problem thought-experiments that perceptual states do. My view is that we ought to maintain the original position, viz., that phenomenology belongs with the sensory: cognitive states do get bound into sensory states, but they do not add any distinctive phenomenal component of their own.

FB: A discussion of cognitive phenomenology is bound to make some reference to the distinction between so-called access-consciousness and phenomenal-consciousness. Why is this distinction important?

PC: I have come to appreciate the importance of this distinction especially in my latest paper co-authored with Bénédicte (Carruthers & Veillet 2017). And really the importance of it emerged for us fairly late in the process. The sorts of thought-experiments that defenders of cognitive phenomenology come up with do not seem to take access-consciousness seriously enough. 

Take Zombie thought-experiments. Here you have access-consciousness – in the sense that you have functional content-bearing states that are playing the same role they are playing in a normal person – but phenomenal consciousness is missing. Zombies are supposed to be access-conscious because access-consciousness is defined in terms of its functional role and zombies are functionally exactly like normal people. Yet when people develop thought-experiments to make a convincing case for cognitive phenomenology, they present us with scenarios that are allegedly different in phenomenal consciousness but are not fully functionally identical. 

One kind of case concerns a French speaker and a non-French speaker who both listen to the same sentence. The upshot is that the two must have different overall experiences, in that one understands the meaning of the sentence and the other does not. What proponents of cognitive phenomenology fail to appreciate, however, is that there is a significant difference in access-consciousness too, because one listener can reliably report what was said, whereas the other is just lost. 

The same happens when people try to devise arguments that are more akin to the zombie thought-experiments. You may be asked to imagine a person who is just like you in all physical, sensory, and functional respects, but meaning-blind – everything they are told, and everything they themselves utter, feels to them like meaningless gibberish. Yet once you start thinking through what this person must be like in order to be an access-conscious equivalent of a normal person, you soon realise that the experiment has been poorly designed. 

Since it seems that this person is at least able to verbally report what was said and form beliefs accordingly, it is not clear in what sense they are meaning-blind – what is it that is missing in a meaning-blind person? I think that once you realise that anyone who is physically and functionally just like you must be aware of the meaning of any utterance at least in the access-conscious sense of ‘aware’, then much of the appeal of these sorts of thought-experiments is lost.

FB: It is your view that phenomenal consciousness extends beyond the purely sensory realm of experience but you also make the case that phenomenology is exclusively non-cognitive and non-conceptual in character. How do these two claims go together? Can you provide an example of phenomenally conscious states that are neither conceptual nor sensory?

PC: There are a number of non-conceptual representations that are non-sensory ones. An interesting case is what is generally referred to in psychology as valence, a term which designates the evaluative component of emotions and affective states, including pain. On standard theories, pain has two components: a sensory component and an evaluative component. The sensory component tells us which part of the body is affected, how intense the pain is, whether it is stabbing or throbbing, and so on. The evaluative component is the unpleasant feeling, the felt badness, associated with the sensation. 

Cognitive scientists have come increasingly to treat valence as unitary across different domains. And indeed one view of decision-making has it that valence might function as a kind of common currency for comparison. Suppose that you have to weigh up how much pain you are willing to endure in exchange for monetary return. How can you achieve a satisfactory trade-off between the pain and the gain involved? 

The suggested answer is that you compare them on the basis of your associated evaluative responses. There is a degree of negative valence that goes with the pain and there is a degree of positive valence that goes with the monetary reward. Both of these values are taken into consideration in calculating the overall valence.

Valence is a non-conceptual representation of value that does not require any categorisation of things into good and bad. But it is also amodal and non-sensory, in that it operates independently of any particular sensory modality, above the sensory domains. So I think valence provides a good example of a conscious state with a distinctive phenomenology which is neither conceptual nor sensory. This can be appreciated from the fact that one can use valence to run hard-problem thought-experiments.

Let us consider an equivalent of Jackson’s colour-deprived-Mary thought-experiment. Imagine someone with congenital pain asymbolia (a condition whereby the affected individual feels sensations of pain but is not bothered by them) who becomes a renowned neuroscientist and learns everything there is to know about the physical and functional organisation of the pain transmission system.

On stabbing a sharp pencil into her skin for the first time after being cured, she might say something like: ‘That’s what the badness of pain feels like’. If phenomenal consciousness is conceived as whatever gives rise to thought experiments of the ‘hard problem’ sort, then there is good reason to think that valence is phenomenally conscious, while at the same time being neither sensory nor conceptual. 

Cases like that of valence motivated me and Bénédicte to propose a realignment of the debate over cognitive phenomenology. The central point of contention shouldn’t be between sensory and alleged cognitive phenomenology, but rather between non-conceptual (including non-sensory) and alleged conceptual phenomenology.

FB: What are your future research plans?

PC: I have a number of projects on the go at the moment. One involves further developing a view of the nature of curiosity, which is characterised by most philosophers and cognitive scientists in terms of a drive for information or a desire for knowledge. So described, curiosity is a metacognitive state: in a desire for knowledge, the content of the desire has the concept knowledge embedded in it. And yet bees are curious; your cat is curious when you pull out some strange little toy.

This motivated me to think about how we should really understand what is going on in curiosity. What I postulate is that it is a distinctive questioning attitude, or to put it otherwise, a desire-like attitude which, unlike desire, can take questions as content – Who, What, Where, When, Why, How, but other sorts of questions too. I also suggest that curiosity motivates you to seek information but doesn’t necessarily involve representing information. One could see a parallel here with fear, in that fear motivates you to seek safety but does not itself represent safety.

Similarly, I suggest that curiosity motivates actions that seek information without representing that you are learning anything. One does not need the concept knowledge, or learning, in order to feel curious. My view is that curiosity is a basic ingredient of both human and animal cognition. I think that any animal that moves around the world in an interesting way, especially those that forage for resources that recur in particular locations, shares an exploratory drive of some sort. I developed this view in a previous paper, and now I want to take it into the human development literature.
