Tuesday, 17 July 2018

Confabulation and Rationality of Self-knowledge

Sophie Keeling is currently a philosophy PhD student at the University of Southampton. She primarily works on self-knowledge which has allowed her to research a range of topics in epistemology, philosophy of mind, and philosophy of psychology. Sophie’s thesis argues that we have a distinctive way of knowing why we have our attitudes and perform actions that observers lack. She gives a brief overview here.

This post summarises my paper ‘Confabulation and Rational Requirements for Self-Knowledge’ (forthcoming in Philosophical Psychology). The paper argues for a novel explanation of confabulation:

Confabulation is motivated by the desire to have fulfilled a rational obligation to knowledgeably explain our attitudes by reference to motivating reasons.

(Following others in the epistemological literature, I term the reason for which we hold an attitude our ‘motivating reason’ for it).

I shan’t seek to define confabulation here (a task in its own right) but will instead note the subtype I’ll explain. I’m interested in cases in which subjects falsely explain their attitudes (e.g. beliefs, desires, preferences) in response to prompting. We see a paradigm example of this in Nisbett and Wilson’s (1977) experiment, in which they arranged four pairs of identical stockings on a table and asked individuals which they preferred and why. Subjects generally picked a pair towards the right of the table. Instead of noting the real cause of their preference – the position of the stockings – or admitting ignorance, subjects gave incorrect explanations. That is, they confabulated an answer, such as the pair’s supposedly superior ‘knit, sheerness, and weave’. Indeed, this is a commonplace phenomenon. We’ve all at some point adopted a stance which we’ve rationalised after the fact. (E.g. I kid myself that I prefer the expensive branded yogurt to the supermarket offering because it’s tastier, and that my preference has nothing to do with the clever marketing.)

The paper then introduces three explananda for our explanation of this phenomenon, and argues that the two main options in the literature fail to account for all of them. For example, confabulation is first-personal – we make these sorts of mistakes more readily about ourselves than about others. (Here I draw on work such as Pronin et al. 2002 concerning the ‘bias blind spot’.) Yet some accounts (e.g. Nisbett and Wilson 1977, Carruthers 2013, and Cassam 2014) struggle to address this important asymmetry in our mistaken self-ascriptions.

I propose an explanation which does account for all three explananda. It appeals to what I call the knowledgeable reasons explanation (KRE) obligation:

The obligation to knowledgeably self-ascribe motivating reasons when explaining one’s own attitude.

We shouldn’t confuse this rational obligation with moral ones. It just captures the thought that I ought to, for example, explain my belief that it will rain by citing a motivating reason, such as the weather forecast. That we bear the KRE obligation is independently plausible: I seem to be doing something irrational and criticisable if I instead answer the question ‘why?’ with ‘no reason’, ‘I don’t know’ or ‘I’m generally pessimistic’.

I use KRE in the following explanation:

We confabulate, and indeed confabulate with the content we do, because we desire to have fulfilled the KRE obligation (i.e. the obligation to knowledgeably explain our attitudes by reference to motivating reasons).

We can now explain the stockings experiment in the following way. The desire to have fulfilled the KRE obligation leads the subjects to confabulate an answer in the absence of a true one they can provide – they did not form their preference on the basis of reasons. And further, they specifically self-ascribe the reason that the stockings were sheerer, say, because it is a plausible motivating reason. This proposal accounts for the explananda in a non-ad-hoc way. For example, confabulation is first-personal because we desire to have fulfilled the obligation to knowledgeably explain our own attitudes by reference to motivating reasons, not other people’s.

The final section raises an upshot for understanding self-knowledge. Contrary to popular assumption, confabulation cases give us reason to think we have distinctive access to why we have our attitudes. What exactly our special access amounts to, though, must be left for further papers!

Thursday, 12 July 2018

Political Epistemology

On 10th and 11th May, at Senate House London, Michael Hannon and Robin McKenna hosted a two-day conference on Political Epistemology, supported by the Mind Association, the Institute of Philosophy, and the Aristotelian Society. In this report I focus on two talks that addressed themes relevant to project PERFECT.

Robert Talisse

On day 1, Robert Talisse explained what is troubling about polarisation. In the past Talisse has developed an account of the epistemic value of democracy in terms of epistemic aspirations (rather than democratic outcomes). In a slogan, "the ethics of belief lends support to the ethos of democracy". We can see this when we think about polarisation.

There are two senses of polarisation: (1) political polarisation and (2) belief (or group) polarisation. Political polarisation is the dropping out of the middle ground between opposed ideological stances, which means that the opposed sides have fewer opportunities to engage in productive conversation. Belief polarisation, by contrast, is something that happens in like-minded political groups and concerns the doxastic content of people's beliefs: people tend to adopt a more extreme version of their original belief when they discuss it with like-minded others.

The problem is that the radicalisation of one's views does not depend on acquiring more or better reasons for one's original views, but on the social dynamics of group discussion. Should people then discuss their views only with their opponents? Not really, as empirical evidence suggests that heterogeneous deliberation inhibits political participation.

What is wrong with belief polarisation and how can we address the problem? Belief polarisation affects not only the content of one's belief and one's confidence in it, but also one's estimation of the people who hold opposed beliefs. The belief-polarised person becomes increasingly unable to see nuances in the opposing view. Moreover, more and more of the opponents' behaviour is seen in the light of their political views, and the opponents come to be seen as diseased or corrupted.

Finally, once the belief-polarised person knows that an expert holds a different political view, the expert's opinion is rejected, even when the expert's advice does not concern their political stance. It is almost as if a sense of ideological purity compromises people's capacity to trust experts with different political views.

How can we overcome these challenges? Preventing belief polarisation is different from depolarising beliefs. More democracy may be good for preventing belief polarisation, but once people are belief-polarised, more democracy does not seem to help: exposure to the other side entrenches polarisation. Maybe we sometimes need less democracy!

A range of non-political behaviours and social spaces (consumer behaviours, community centres, workplaces, religious affiliations) have become expressions of ideological stances, which means that people are less and less likely to mix with those who hold opposed political views. Humanising interactions across political divides become increasingly unlikely. This is due to the political saturation of social space.

So one possible solution is to carve out social spaces that are not already politically saturated: there must be activities in which political affiliations do not matter.