
Failing to Self-ascribe Thought and Motion - Part I


This post is by David Miguel Gray, currently Assistant Professor of Philosophy at Colgate University and, from Spring 2017, Assistant Professor of Philosophy at the University of Memphis. David’s research interests are in the philosophy of cognitive psychology (in particular cognitive psychopathology), as well as philosophy of mind and the philosophy of race and racism.

In this post David provides an explanation of abnormal experiences and the inferential processes involved in delusions of thought insertion and alien control. Next week, he will address some theoretical issues with n-factor accounts of monothematic delusions. Both posts draw from his recent paper ‘Failing to self-ascribe thought and motion: towards a three-factor account of passivity symptoms in schizophrenia’, published in Schizophrenia Research.

In my article I focus on two commonly known passivity symptoms of schizophrenia: thought insertion and alien control. Leaving aside the question of why delusional hypotheses are maintained in light of conflicting evidence (and become endorsed or believed by the subject) (Davies and Coltheart 2000), I focus just on the sort of explanation we need of how a delusional hypothesis is formed. I take the source of such a hypothesis to be an abnormal experience, and I take it that thought insertion and alien control involve the same kind of abnormal experience (at least in one central aspect). However, much work has to be done to provide an account of how an abnormal experience could even give rise to a delusional hypothesis such as ‘Bob is putting thoughts into my head’ or ‘I wanted to raise my arm but then Marissa did it’.

Why should we think that abnormal experience alone is insufficient for providing prima facie justification for a delusional hypothesis? Delusional hypotheses are normally complex enough that they outstrip what experience alone could justify. Having a weak autonomic response when I see my wife does not provide prima facie justification for the delusional hypothesis that my wife has been replaced by an impostor, although it might provide prima facie justification for the ‘proto-hypothesis’ that I have low emotional arousal in front of what appears to be my wife. (By ‘proto-hypothesis’ I mean a description of the abnormal experience, simpliciter.) I take it that experiences which would justify proto-hypotheses can more easily be correlated with the mechanisms (e.g. weak autonomic responses) that give rise to abnormal experiences.

This is a good thing. If we had to assume that abnormal experiences were complex enough to justify delusional hypotheses, making correlations between abnormal experiences and the mechanisms that are supposed to explain them would be all the more difficult. Would it even be possible to have a my-wife-has-been-replaced-by-an-impostor experience? Furthermore, could we correlate such a complex abnormal experience with just a weak autonomic response? Explaining these inferential steps will help us account for the sorts of inferences required for delusional hypothesis formation. Just as importantly, recognizing what we do not have to ‘build in’ to our characterizations of abnormal experiences will most likely result in a better correlation between the abnormal experience and the cognitive models we use to describe such experiences (I will not address this last point here, but I do in my article).

Getting back to abnormal experiences, I claim that the delusional hypotheses formed in cases of thought insertion and alien control stem from an abnormal experience which requires the subject to identify who it is that is thinking a thought or performing an action. To put it differently, using thought insertion as an example: I argue that for a thought which a subject claims to experience as inserted—let’s say a thought with content p—the proto-hypothesis would be something like ‘this thought p that I am introspectively experiencing requires identification’. I’m sure this claim is itself a bit confusing, so here is an argument for it.

We can think of thought insertion reports as fundamentally involving an introspection-based error through misidentification. That is, on the basis of introspective information, subjects who report experiences of thought insertion have misidentified the origin of their thought (e.g. ‘There is a thought in my head, but it is not mine, it’s Bob’s thought’). What is unusual about this, at least for many philosophers, is that the reason we are so good at self-ascribing our thoughts is not that we are really good at identifying which thoughts are ours and which are not. Rather, what is special about our introspective access to our thoughts is that we do not need to identify whose thoughts they are at all (Wittgenstein 1958, Shoemaker 1968).

To use an old example, if I realize that I have a toothache, it doesn’t make sense to ask ‘There is a toothache going on, but is it mine?’ Similarly, in the regular experience of our own thoughts it doesn’t make sense to ask, on the basis of introspective experience, ‘There is a thought going on, but is it mine?’ However, in the case of thought insertion a misidentification is made on the basis of introspective experience. And, if a misidentification occurred, that means that an act of identification had to occur. This requirement to identify one’s own thoughts, even when one has introspective access to them, clearly demarcates ‘inserted’ thoughts from thoughts as normally experienced.

We might ask, ‘Why would the mere fact that one has to identify one’s thoughts lead to one thinking that they are not one’s own thoughts?’ After all, even if one has an abnormal experience that requires one to identify a thought, this does not require the assignment of that thought to someone else.

I think that the reason thoughts experienced in this way are in fact assigned to another person is that it is a hallmark of regularly experienced thoughts that we never have to identify whose thoughts they are. Given this, having an experience which raises an issue of identification is a prima facie reason to think the thoughts in question are not one’s own (I attempt to give a full account of how one would get from this proto-hypothesis to the delusional hypothesis in my article).
