
Chandaria Lectures: Andy Clark

In this post, Sophie Stammers reports from the Chandaria Lectures, hosted by the School of Advanced Study at the University of London. Professor Andy Clark, of the University of Edinburgh, gave this year’s lectures, in which he introduced the notion of ‘predictive processing’. Over the course of the three lectures, he put forward the case for understanding many of the core information processing strategies that underlie perception, thought and action as integrated through the predictive processing framework.


On a model of perception popular with Cartesians, and undoubtedly dominant in areas of the canon that I was acquainted with as an undergraduate, perception is something of a passive business. Perceivers employ malleable receptor systems that (aim to) faithfully imprint the world as it is, delivering a raw stream of information that is made sense of downstream in later processing. Clark dubs this the “cognitive couch potato view”. Despite its past popularity, this view seems incompatible with evidence from multiple research streams in cognitive science which indicate that perceivers are far from passive, and bring many of their own expectations to the table. Predictive processing (PP) aims to provide a story which both accounts for and unifies these findings, whilst also doing justice to the human experience in the midst of it all.

PP systems don’t just take in sensory information from the world; they are constantly trying to actively predict the present sensory signals using probabilistic models. Incoming sensory signals are met by a flow of top-down prediction, and when this matches the sensory barrage, the system has unearthed the most likely set of causes that would give rise to the particular experience. “Prediction errors” (information about mismatches between current prediction and sensory information) indicate a gap in the predictive model, and that a new hypothesis should be selected to accommodate the current sensory signal.
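To make the shape of that loop concrete, here is a minimal toy sketch (my own illustration, not anything presented in the lectures; the variables and numbers are invented) in which a model repeatedly compares its top-down prediction against a noisy incoming signal and uses the prediction error to revise its hypothesis about the hidden cause:

```python
import numpy as np

# Toy predictive-processing loop: the "world" emits a noisy sensory signal,
# and the model keeps predicting it, using prediction error to update its
# current hypothesis about the hidden cause. Purely illustrative.

rng = np.random.default_rng(0)
true_cause = 4.0        # hidden state of the world
hypothesis = 0.0        # the model's current best guess
learning_rate = 0.1     # how strongly prediction error revises the hypothesis

for step in range(50):
    sensory_signal = true_cause + rng.normal(scale=0.5)  # noisy bottom-up input
    prediction = hypothesis                               # top-down prediction
    prediction_error = sensory_signal - prediction        # mismatch signal
    hypothesis += learning_rate * prediction_error        # revise the model

print(f"final hypothesis: {hypothesis:.2f} (true cause: {true_cause})")
```

As the prediction error shrinks towards zero, the model has settled on a hypothesis that accounts for the incoming signal – the sense in which the system “unearths” the most likely causes of its input.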

Perhaps rich, world-revealing perceptions – as of tables, chairs, conversations, lovers, etc. – only arise from the otherwise indiscriminate sensory barrage when the incoming sensory signal can be matched with top-down predictions.

Listening to sine-wave speech, in which much of the frequency content has been stripped away, provides a nice illustration of this process. Many people find it hard to discern words in the remaining frequencies, but once we’re prompted to adopt a new model of what we’re about to hear, the percept can change quite dramatically…

The PP system also weights incoming sensory data: top-down prediction can dampen some of the sensory signals whilst amplifying others. The “mask illusion” illustrates this nicely. The system forefronts “this-is-a-face” signals whilst treating signals that contradict them as noise. Those unfamiliar with this illusion can watch celebrities grapple with their own predictive system weighting here. The illusion also seems to work with reptilian faces.
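One hypothetical way to gloss this weighting computationally (an illustrative sketch with made-up numbers, not Clark’s own formalism) is to scale each channel’s prediction error by an estimated precision, so that signals the system treats as noise barely move the model:

```python
import numpy as np

# Precision-weighted prediction error: each channel's error is scaled by how
# reliable (precise) the system currently takes that channel to be.
# Numbers are invented for illustration.

prediction = np.array([1.0, 1.0, 1.0])      # top-down "this-is-a-face" expectations
sensory_input = np.array([1.1, 0.9, -2.0])  # the third channel contradicts the face model
precision = np.array([1.0, 1.0, 0.05])      # the contradicting channel is treated as noisy

raw_error = sensory_input - prediction
weighted_error = precision * raw_error      # down-weighted errors barely drive revision

print(raw_error)       # approximately [0.1, -0.1, -3.0]
print(weighted_error)  # approximately [0.1, -0.1, -0.15]
```

On this gloss, the large error from the contradicting channel is effectively silenced, so the “this-is-a-face” hypothesis stays in place.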


PP systems are consistent with, but don’t necessitate, “inbuilt” hypotheses – Clark assured us that they may operate anywhere along the nativist-empiricist spectrum. Whilst coming ready-made with some innate hypotheses might give a predictive system a head-start, we can still get the story going with a system that generates hypotheses over and over until a match with incoming sensory information is made. Repeat the process for some other aspect of the world, until you hit on a match…and so on.

Results from a number of fMRI studies and research on attention are illuminated by the PP framework (e.g. Murray et al., 2002; Muckli, 2010; Adams et al., 2013; Schröger et al., 2015). For instance, when perceivers expect a familiar sequence of birdsong to continue in a particular way, but the sequence is unexpectedly cut short, we see cortical activity consistent with a momentary hallucination of the missing chirp, followed by a big burst of prediction error (Adams et al., 2013).

On Clark’s account, action flows from prediction as much as perception does. A simple motor action is a matter of predicting the trajectory of proprioceptive sensations that would ensue if you were to perform that action. This generates a flow of prediction error (because you are not currently performing the predicted action). The prediction error is resolved by down-regulating the impact of sensory information specifying current bodily position, and performing the action in question.
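A rough sketch of that idea (again my own illustration, assuming for simplicity that the “action” is just moving a limb towards a predicted position): rather than revising the prediction, the system discharges the error by changing its bodily state to match the predicted proprioceptive trajectory:

```python
# Action as prediction: the system predicts the proprioceptive state it would
# feel if the movement were performed, then resolves the resulting prediction
# error by moving, not by revising the prediction. Illustrative numbers only.

current_position = 0.0    # where the limb actually is
predicted_position = 1.0  # proprioceptive prediction: "the limb is at the target"
gain = 0.3                # how quickly the error is discharged through movement

for step in range(20):
    prediction_error = predicted_position - current_position
    current_position += gain * prediction_error  # the error drives the action itself

print(f"limb position after acting: {current_position:.3f}")
```

The same error signal that would ordinarily update the model is here cashed out as movement, which is what makes perception and action two sides of one predictive process on this account.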

In many cases, perception is directed at predicting how actions will unfold (Tatler et al., 2011). For instance, people gaze just ahead of the knife’s point of contact with the bread as they cut a sandwich (Hayhoe et al., 2003). Further, PP can account for “utility-based skewing”, in which systems favour perceptions that best serve their actions (Mark et al., 2010). Clark proposed that PP systems might be considerably geared towards action generation, and that we should think of prediction error signals as sensory information that hasn’t yet been leveraged to inform a rolling sequence of action engagement in the world.

PP might also explain something of the nature of our percepts. Clark drew our attention to a range of evidence in support of the claim that prediction of a stimulus causes it to enter conscious experience more readily, at which point it is dealt with more efficiently than unexpected stimuli. Previous expectations of visual stimuli determine when the stimuli in question enter conscious experience (Melloni et al., 2011). We also predict the intensity of pain on the basis of prior expectations, which modulate the experience accordingly (Brown et al., 2008). Further, interoceptive prediction – that based on sensing our own visceral states – plays a role in how we perceive ourselves and others: estimations of our own cardiac states interact with our perception of the intensity of another’s emotion (Gray et al., 2007). PP also fits with evidence that we bring social and cultural expectations to our percepts, such as, for instance, when perceiving an ambiguous object as a weapon when in the hands of a person who fits our expected racial profile of a gun wielder (as in Payne, 2006).

What about the big human stuff? Our goals, intentions, motivations? Our grand projects? Clark floated the idea that it might be “prediction all the way up”. When you have some idea about how you might act in the world, then you can infer information about how your sensory barrage would change in response to performing those actions, and those predictions are what get you started. We considered a big project, and how that might be broken down into sequences of sensory matching and prediction error-driven action. For example, imagine that you want to sail a yacht around the world. This leads you to predict you’re going to pass your yacht master exam. That prediction itself entrenches how information is processed at a more local level. You predict, for instance, that you’re going to engage in flashcard revision when you can, and when you recognise an appropriate opportunity, sensory modes engage flashcard practice. And so we go on, installing expectations about our own movements through the world, and these expectations themselves lead us to match incoming sensory information or resolve prediction errors in accordance with those expectations.

PP is a big theory promising big results. We were left with a flavour of what sorts of questions future research is going to tackle. What does all of this have to say about the ‘hard’ question of consciousness, or, alternatively, might it illuminate why we’re asking the wrong question there? What can PP tell us about abnormal cognitive functioning? Does the current model combine top-down and bottom-up processing in the right way, and how might this issue be tested? How do social and cultural influences sculpt the brain, and can we use PP to do better in these cases?

For those who predict that they will be keeping up with future research, Andy Clark is heading up a project entitled “Expecting Ourselves: Embodied Prediction and the Construction of Conscious Experience” (XSPECT) that will be investigating these and related issues over the next four years. You can minimise prediction error by visiting the project site and staying up to date on the Brains Blog.
