
Conscious Will, Unconscious Mind


It was a pleasure to be invited to the “Conscious Will and the Unconscious Mind” workshop, held at the Department of Philosophy, University of Duisburg-Essen, on 28 June this year. Organised by Astrid Schomäcker and Neil Roughley, the workshop set out to explore whether influences like implicit biases threaten free, responsible agency, along with a series of related questions. The following is a summary of the talks by the three speakers.


Sven Walter, Professor of Philosophy at the Institute of Cognitive Science, University of Osnabrück, began by outlining two opposing ideas about the role of science in the free will debate. First: free will is incompatible with a naturalistic view of the world (a view that often crops up in popular science magazines and journals). Second: the question of what free will amounts to is a philosophical one, so empirical science is not the appropriate disciplinary home for an investigation into free will. For Sven, neither of these ideas is quite right.

Sven argued that there are two distinct projects here, one for each discipline. Project 1: establish what conditions would need to hold in order for free will to obtain. This is a conceptual question, and the appropriate work is philosophical. Only then comes Project 2: using the methods of empirical science, establish whether those specified conditions in fact obtain.

Sven then considered various free will theories to come out of Project 1-type work, and asked whether any Project 2-type work shows that the relevant sort of free will does not obtain. For instance, if Project 2-type work shows that determinism is true (or, perhaps, is our best model of the world), this would only rule out libertarian kinds of free will, those that rely on a strong interpretation of the possibility of doing otherwise; compatibilist accounts of free will are safe.

Project 2-type work demonstrating unconscious influences on action that render our actions normatively detached from our system of reflected preferences and values might show that free will is limited. Sven went over some examples of this sort of work that might show that free will is impaired, but qualified the claims by acknowledging the replicability crisis in social psychology (like this, for example). He discussed whether a capacity for reasons-responsiveness (as in the theory of Fischer and Ravizza 1998) could save free will in these cases, but worried about what use a capacity is if we regularly fail to exercise it in a wide variety of circumstances.


Beate Krickel, Principal Investigator and Scientific Coordinator of the Situated Cognition Group at Ruhr-University of Bochum, gave the next talk. Beate started by drawing on research showing that implicit biases are sometimes available to awareness. For instance, in a study by Hahn et al. (2014), participants were able to predict the content of their biases. Beate also drew on Gawronski and Bodenhausen’s (2014) APE model, in which the rejection of propositions contrary to those already accepted appears to be a conscious process. And yet, as Beate pointed out, people are still often surprised to learn they have implicit biases. What should we make of this?

Beate suggested we could draw on work on repression in the Freudian tradition to understand the puzzle. Repression starts with an inner conflict between beliefs and desires. Beate drew on an example in which a person desires her best friend’s partner, but also believes that if she pursued the desire, she’d hurt her friend. This conflict triggers an unconscious process that leads to an unconscious product: the desire becomes unconscious and the inner conflict is resolved.

Beate used an account of repression on which the state becomes access-unconscious. Access consciousness is usually tested for by asking the subject for a report. As Beate showed us, the process preceding the report of a conscious stimulus is complex. It may comprise: some early visual processing, attention to it, categorisation, storage in short-term memory, semantic processing, and finally the motor activation necessary for speech. There are many stages at which this process might be halted, all of which lead to a failure to report a stimulus.

Beate then presented her model, on which we might often be driven by self-image concerns and internalised social norms to repress certain feelings, so that they end up not being categorised and ultimately not available to access consciousness. However, these mental states still drive behaviour (for instance, they may be picked up by an Implicit Association Test). Beate concluded by demonstrating how her model solves a number of difficult problems in the self-deception literature, and outlined the need for a future research project to deliver a taxonomy of different kinds of unconsciousness.


In my talk, I wanted to explore the nature and measurement of the attitudes (often described as our sincerely held beliefs and values) that implicit attitudes are supposed to stand in contrast with. Implicit attitudes are usually postulated to explain systematically biased behaviours observed both in lab tests (e.g. time-pressured categorisation tasks like the Implicit Association Test; see an excellent discussion of recent issues here) and in real-world decision making (e.g. lawyers evaluating a piece of legal writing, as in Reeves 2014: unbeknownst to the lawyers, their responses were collected by experimenters, who found that errors in one and the same piece of writing were more readily identified when the author was presumed to be black rather than white).

How do we know that the attitudes driving the observed biased behaviours aren’t regular old beliefs and values? Participants doing time-pressured categorisation tasks typically also answer a series of questions (“self-report questionnaires”) designed to measure their beliefs and values. Sometimes they are asked what sort of principles guide their decision making (as in Uhlmann and Cohen 2005).

Interestingly, there is a whole literature on “confabulation” (answering questions sincerely, without any intention to deceive our audience, but ending up saying something ill-grounded and false) which hasn’t been brought to bear on the measurement of explicit attitudes in implicit attitude studies. Lisa and I had the pleasure of co-editing a volume on this very phenomenon – see more here. Confabulation is more likely when there are cultural pressures to present oneself in a socially positive way. Can we be sure that when people answer these explicit attitude reports they’re not confabulating? I argued no.

This doesn’t mean that people are in fact confabulating on measures of explicit attitude, but it opens up the discussion of what exactly we are measuring with supposed explicit attitude measures, and whether it is right to think we are reliably capturing professed and sincerely held beliefs and values. For one thing, self-attributing the kind of general claims prevalent in explicit attitude measures comes very easily. Further, a claim to hold a particular explicit attitude at t1 does not rule out that an attitude inconsistent with the first might be evoked at a relatively close t2 (see the change blindness literature). I wonder whether good practice from the methodology of measuring implicit attitudes might be carried over to the measurement of explicit attitudes: namely, measuring to what extent explicit attitudes are borne out in behaviour in appropriate instances, and whether they are found to be stable over time. I wonder what we might find then.
