
What’s wrong with the computer analogy?

Today's post is by Harriet Fagerberg at King’s College London & Humboldt-Universität zu Berlin on her recent paper “Why mental disorders are not like software bugs” (forthcoming, Philosophy of Science).

What, if anything, is the difference between mental disorders and brain disorders? Are mental disorders brain disorders? If not, are they disorders at all? According to one prominent view in the philosophy of psychiatry, mental dysfunction does not entail brain dysfunction, just as software dysfunction does not entail hardware dysfunction in a classical computer. Wakefield writes:

It is true that every software malfunction has some hardware description; that is not at issue. Rather, the point is that a software malfunction need not be a physical hardware malfunction. Analogously, even if all mental states are physical states, it does not follow that a mental dysfunction is a physical dysfunction. (Wakefield, 2006, p. 129; see also Papineau, 1994)

Nevertheless, because dysfunctions count as medical disorders (per the natural dysfunction analysis of medical disorder), purely mental dysfunctions still count as real disorders. Thus, we get real mental disorders without brain dysfunction, and without appealing to some spooky dualism about the mental.

The argument from the computer analogy is both intuitive and appealing. However, as I argue in ‘Why mental disorders are not like software bugs’, it is also unsound. It rests on the false premise that the mind-brain relation is analogous to the software-hardware relation in all relevant ways. In fact, there is an important disanalogy between mind-brain and software-hardware: software functions need not be hardware functions, but mental functions are brain functions.

The etiological theory of function, on which the natural dysfunction account rests, states that F is a function of X iff F is a selected effect of X, that is, an effect of X which explains why X was selected (whether by evolution or by a designer).
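Schematically, and in my own notation rather than the paper’s, the two definitions in play might be rendered as follows (the dysfunction clause is my assumed gloss on the natural dysfunction analysis):

```latex
% A rough formalization (my notation, not the paper's).
% "SelEff" covers selection by evolution or by a designer.
\[
  \mathrm{Function}(F, X) \iff \mathrm{SelEff}(F, X)
\]
% Natural dysfunction analysis (assumed gloss): X is dysfunctional
% iff some function of X goes unperformed.
\[
  \mathrm{Dysfunction}(X) \iff \exists F \,\bigl(\mathrm{Function}(F, X) \land \lnot \mathrm{Performs}(X, F)\bigr)
\]
```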

We can now ask: are all software functions selected effects of the hardware? It seems not. We can imagine a scenario in which the hardware designers had no idea that the hardware they were designing would eventually come to run a word processor. Thus, if there is an error in the code which prevents (say) the deletion of text, this is compatible with the hardware doing everything it was designed to do. The hardware was designed simply to run code, and it is doing so correctly.
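A toy sketch of the scenario (hypothetical names, deliberately minimal, and not from the paper): the hardware performs exactly what it was selected to do, namely execute instructions, while the software fails at its own function of deleting text.

```python
# A toy sketch, not the paper's example: a "hardware" whose only
# selected effect is executing instructions, running buggy software.

class Hardware:
    """Designed to do one thing: execute whatever instruction it is given."""

    def execute(self, instruction, state):
        # The hardware performs its selected effect flawlessly.
        return instruction(state)


def delete_text(document):
    """Software function: remove the selected span of text.
    BUG: the slice bounds are wrong, so nothing is ever deleted."""
    start, end = document["selection"]
    text = document["text"]
    # Should be text[:start] + text[end:]; as written, it's a no-op.
    document["text"] = text[:start] + text[start:]
    return document


hw = Hardware()
doc = {"text": "hello world", "selection": (5, 11)}
doc = hw.execute(delete_text, doc)
print(doc["text"])  # still "hello world": a software dysfunction,
                    # though the hardware did all it was designed to do
```

The software-level norm (delete the selection) is violated while the hardware-level norm (execute instructions) is perfectly satisfied, which is just the dissociation the analogy trades on.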

Mental functions, by contrast, are necessarily selected effects of the brain. The only way a mental function can be configured into the mind via evolution is by being causally efficacious in the natural selection of the implementing organ, i.e. the brain. There is no pre-neural ‘mindware’ designer through which purely mental norms of operation might arise. It follows that mental functions are brain functions. Accordingly, should one fail, that failure would constitute a brain dysfunction, whether or not we can determine this from the physical facts alone.
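Put as a schematic derivation (again my reconstruction, in the same assumed notation as above):

```latex
% The disanalogy, schematically (my reconstruction):
% Software: a function of the software need not be a selected
% effect of the hardware, so Function(F, software) does not
% yield Function(F, hardware).
% Mind: selection for mental traits runs through the brain, so
\[
  \mathrm{MentalFunction}(F)
  \;\Rightarrow\; \mathrm{SelEff}(F, \mathrm{brain})
  \;\Rightarrow\; \mathrm{Function}(F, \mathrm{brain})
\]
% Hence the failure of a mental function is the failure of a
% brain function, i.e. a brain dysfunction.
```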

In this sense, mental disorders really aren’t like software bugs. 
