
Home as Mind: AI Extenders and Affective Ecologies in Dementia Care

The blog post today is by Joel Krueger (University of Exeter) on his recent paper "Home as Mind: AI Extenders and Affective Ecologies in Dementia Care" (Synthese 2025).


Joel Krueger

AI is everywhere. Admittedly, much of the hype is overblown (AI fatigue is real; I feel it, too). Still, AI can do impressive things—and it’s already impacting our lives in many ways. Discussions in philosophy and beyond often focus on big issues like the looming possibility of artificial consciousness (very unlikely) and artificial general intelligence (also unlikely, despite what Sam Altman and other techbros keep insisting), or more immediate practical and ethical worries about job displacement, bias, privacy, environmental costs, and the potential for misuse.

Critical discussions like these are important. They help tamp down relentless hype cycles that get in the way of clear-eyed discussions about how AI-powered technology should fit into our lives. But while scepticism is warranted, it shouldn’t blind us to areas where AI holds significant promise, too.
    
In my recent Synthese paper, “Home as Mind: AI Extenders and Affective Ecologies in Dementia Care”, I explore one such area: potential applications of “AI extenders” to dementia care. According to Karina Vold and Jose Hernández-Orallo (2021), who coined the term, “AI extenders” are AI-powered technologies—devices, wearables, apps and services, etc.—that augment human cognitive abilities in ways different from old-school extenders like notebooks, sketchpads, models, and microscopes (think of Otto and his trusty notebook in Andy Clark and David Chalmers’ seminal 1998 paper, “The Extended Mind”).

Cognitive extenders are powerful. As the Otto thought experiment demonstrates, they let agents do things they couldn’t otherwise do, cognitively speaking. But they still need a human user in the control loop. Otto’s notebook may extend his memory and dispositional beliefs, but it’s not doing anything until he picks it up to remember how to get to MoMA. AI extenders, however, are different. They can do cognitive work on their own. Additionally, they can learn new tricks, and gain new abilities, without constant monitoring or training from human users. This “self-supervised learning” makes AI extenders potentially even more powerful—more epistemically and emotionally transformative—than (mere) cognitive extenders.

How does all this relate to dementia care? In the paper, I consider ways AI extenders might soon fluidly integrate into our surroundings. My focus is on AI extenders as ambiance: so thoroughly embedded into things and spaces that they create “ambient smart environments” which fade from view and seamlessly support people with dementia by doing cognitive and emotional work on their behalf. What makes AI extenders particularly promising here is that they can adapt and develop—over multiple timescales via self-supervised learning—to a user’s unique values, preferences, and behaviour. In so doing, they support independent living at home and may delay the transition to assisted care.  

I sketch ways these AI extenders might work together to support safety and wellbeing, to provide environmental control and comfort, and to furnish cognitive, emotional, and social support. Not all the tech I discuss exists yet. But much of it does—and it’s improving rapidly. So, I argue that AI extenders and ambient smart environments may soon offer promising pathways for developing what Matilda Carter (2022) calls “non-dominating” care strategies. These strategies recognise that despite their cognitive decline, people with dementia are still capable of generating their own authentic values and interests—both of which must be respected, to the extent this is possible, when making care decisions and finding ways to empower their independence.

Of course, there are many reasons to be wary of AI and the motives of the tech companies developing it. I also consider a variety of worries, including security and safety risks, the possibility of misinterpretation and error, the role bias might play in rolling out and implementing this tech, concerns about privacy and diminished autonomy, and the threat of social isolation. These are all substantive concerns. Yet, despite these worries, AI holds promise. Philosophers therefore ought to be part of ongoing discussions and help decide how we might put it to work in just and beneficial ways.

