This is a response to Max Coltheart's post (and comments), posted on behalf of Phil Corlett.
It is important to appreciate that there are at least two different types of prediction error. There is the standard, gradient-descent or delta-rule-based prediction error that is employed to update weights in particular learning settings. I'll call this within-model prediction error; we use it to learn about the specific aspects of a model that we have assumed to pertain. There is also a prediction error over models – it tells us that the model we assumed is inadequate and we need to invoke another. Computational learning theorists like Nathaniel Daw have shown separable but interacting neural mechanisms for these errors – crucially, model or state prediction error is located in DLPFC and parietal cortex. Could prediction error be the algorithmic basis (pace David Marr) for Factors 1 and 2?
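The distinction between the two kinds of prediction error can be sketched in code. This is a toy illustration only, not anything from the original post: the learning rate, the outcome sequence, the function names, and the crude "surprise" measure standing in for a model-level error are all assumptions made for the sketch.

```python
# Within-model prediction error: the delta (Rescorla-Wagner style) rule,
# which updates weights inside an assumed model of the task.
def delta_rule_update(weight, outcome, cue=1.0, learning_rate=0.1):
    prediction = weight * cue
    prediction_error = outcome - prediction      # within-model PE
    return weight + learning_rate * prediction_error, prediction_error

# A stand-in for a between-model ("state") prediction error: a running check
# on whether the assumed model is adequate at all. Here, crudely, the mean
# squared within-model error over recent trials.
def model_surprise(errors):
    return sum(e * e for e in errors) / len(errors)

w = 0.0
errors = []
for outcome in [1, 1, 1, 1, 0, 0, 0]:   # the contingency reverses midway
    w, pe = delta_rule_update(w, outcome)
    errors.append(pe)

# If recent surprise stays high, the delta rule alone is not enough:
# the agent should consider that the state of the world itself has changed
# and a different model is needed.
recent = model_surprise(errors[-3:])
```

The point of the sketch is only that the two signals do different jobs: the first nudges parameters within a model, the second flags that the model itself may need replacing.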
Phil Corlett:
As usual, Max, a thorough, interesting, and well-written piece.
I am curious about a few things.
First, you say that the prediction error signal fails in our model. Are you implying that we believe delusions form in the absence of prediction error? Our data point to the opposite case. Prediction errors are inappropriately engaged in response to events that ought not to be surprising. That is why people with delusions learn about events (stimuli, thoughts, percepts) that those without delusions would ignore.
Second, you claim that prediction error is key to belief updating in your model. Are you aligning prediction error with Factor 1 or Factor 2? It seems Factor 1, but I wanted to check – particularly since you align Factor 2 with the functioning of right dorsolateral prefrontal cortex, which, as you know, we’ve implicated in prediction error signaling with our functional imaging studies.
Third, I am curious about some specific facets of two-factor theory. It seems to me that your account is based on, and works well for, what I call neuropsychological delusions – delusions that arise from some sort of closed head injury damaging right frontal and parietal cortex, amongst other regions. Your argument is clear: you need perceptual system damage (Factor 1) and belief evaluation damage (Factor 2). In these neuropsychological cases, doesn't the damage happen simultaneously, as a result of a stroke or accident? If so, why do patients update their old belief at all?
According to your model, you would need to wake up from your coma, have the experience of unfamiliarity for your wife, update your belief appropriately (using the normally functioning prediction error signal), and then hold fast to that belief. To form the delusion based on the odd experience, wouldn't belief evaluation need to be working normally at first, and then break down only once the belief is formed?
Fourth, why are these monothematic delusions so circumscribed? The belief evaluation failure seems rather selective – these patients do not form delusions about numerous themes, as patients with schizophrenia do. Wouldn't a general belief evaluation deficit predict that they would?
Fifth, your model does not allow for any top-down influence of belief on perception. Visual illusions like the rotating hollow mask demonstrate the powerful effect that priors have on how we see the world. Your separate factors seem not to allow for that – would you agree?
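The way a strong prior can override sensory evidence, in the spirit of the hollow-mask illusion, can be shown with a minimal Bayesian calculation. The hypotheses and the specific probabilities below are illustrative assumptions, not figures from either model under discussion.

```python
# Two hypotheses about a viewed face: it is convex (normal) or concave (hollow).
prior = {"convex": 0.99, "concave": 0.01}      # lifelong prior: faces stick out
likelihood = {"convex": 0.2, "concave": 0.8}   # sensory data mildly favour concave

# Bayes' rule: posterior proportional to prior times likelihood, then normalise.
unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: unnorm[h] / z for h in unnorm}

# posterior["convex"] is about 0.96: despite the evidence favouring "concave",
# the strong prior wins, and the percept is of a convex face.
```

The numbers are arbitrary, but the qualitative behaviour is the point: when the prior is strong enough, moderately contrary evidence does not flip the percept – which is exactly the kind of top-down influence the question raises.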
Along these lines, spontaneous confabulation – where, after a head injury, people believe, for example, that they are still engaged in their pre-injury lives (e.g. a lawyer referring to the doctors and nurses in the hospital as judges and barristers) – seems to be a more top-down, delusion-like phenomenon. What is the Factor 1 here?
Furthermore, Max asked whether there were any examples of people with pure Factor 1 damage who form delusions (as prediction error theory would demand). In the absence of a clearer definition of what Factor 1 is, I will assume that it would engender a pure sensory deficit. There is a case report of someone with damage to the brachial plexus, a key structure of the peripheral nervous system supplying the arm and hand. After damaging his brachial plexus (and, crucially, in the absence of central damage), he began to feel that his hand had been replaced by a mechanical robot hand that did not belong to him, and he began to produce elaborate narratives to explain his experiences.
Finally, I would like to offer a consilient explanation that might unite our models. Perhaps that is something to be developed in another blog post.