Today’s blog post is by Joel Krueger (University of Exeter) on his recent paper "Home as Mind: AI Extenders and Affective Ecologies in Dementia Care" (Synthese 2025).
*[Photo: Joel Krueger]*
AI is everywhere. Admittedly, much of the hype is overblown (AI fatigue is real; I feel it, too). Still, AI can do impressive things—and it’s already impacting our lives in many ways. Discussions in philosophy and beyond often focus on big issues like the looming possibility of artificial consciousness (very unlikely) and artificial general intelligence (also unlikely, despite what Sam Altman and other techbros keep insisting), or more immediate practical and ethical worries about job displacement, bias, privacy, environmental costs, and the potential for misuse.
Critical discussions like these are important. They help tamp down relentless hype cycles that get in the way of clear-eyed discussions about how AI-powered technology should fit into our lives. But while scepticism is warranted, it shouldn’t blind us to areas where AI holds significant promise, too.
In my recent Synthese paper, “Home as Mind: AI Extenders and Affective Ecologies in Dementia Care”, I explore one such area: potential applications of “AI extenders” to dementia care. According to Karina Vold and José Hernández-Orallo (2021), who came up with the term, “AI extenders” are AI-powered technologies—devices, wearables, apps and services, etc.—that augment human cognitive abilities in ways different from old-school extenders like notebooks, sketchpads, models, and microscopes (think of Otto and his trusty notebook in Andy Clark and David Chalmers’s seminal 1998 paper, “The Extended Mind”).
Cognitive extenders are powerful. As the Otto thought experiment demonstrates, they let agents do things they couldn’t otherwise do, cognitively speaking. But they still need a human user in the control loop. Otto’s notebook may extend his memory and dispositional beliefs, but it’s not doing anything until he picks it up to remember how to get to MoMA. However, AI extenders are different. They can do cognitive work on their own. Additionally, they can learn new tricks and gain new abilities without constant monitoring or training from human users. This “self-supervised learning” makes AI extenders potentially even more powerful—more epistemically and emotionally transformative—than (mere) cognitive extenders.
How does all this relate to dementia care? In the paper, I consider ways AI extenders might soon fluidly integrate into our surroundings. My focus is on AI extenders as ambience: so thoroughly embedded into things and spaces that they create “ambient smart environments” which fade from view and seamlessly support people with dementia by doing cognitive and emotional work on their behalf. What makes AI extenders particularly promising here is that they can adapt and develop—over multiple timescales via self-supervised learning—to a user’s unique values, preferences, and behaviour. In so doing, they support independent living at home and may delay the transition to assisted care.
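To make that background adaptation a bit more concrete, here is a minimal toy sketch in Python. It is purely illustrative: the agent, names, and numbers are invented for this post and come from neither the paper nor any real care product. The idea is an ambient lighting agent that treats a resident’s manual adjustments as its only learning signal and gradually begins pre-setting the lights on their behalf, with no explicit programming by the user.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class AmbientLightAgent:
    """Toy ambient agent (hypothetical, for illustration only).

    It learns a resident's preferred evening brightness from their
    manual overrides and then pre-sets it unprompted, loosely
    mirroring the kind of background adaptation described above.
    """
    observed_overrides: list = field(default_factory=list)
    default_level: float = 0.8  # brightness on a 0.0-1.0 scale

    def observe(self, chosen_level: float) -> None:
        # Every manual adjustment the resident makes is treated as a
        # training signal; no setup or supervision is required.
        self.observed_overrides.append(chosen_level)

    def evening_setting(self) -> float:
        # With no observations yet, fall back to a safe default;
        # otherwise drift toward the resident's own pattern.
        if not self.observed_overrides:
            return self.default_level
        return mean(self.observed_overrides)

agent = AmbientLightAgent()
for level in (0.6, 0.55, 0.5):  # three evenings of manual dimming
    agent.observe(level)
print(f"Tonight's preset brightness: {agent.evening_setting():.2f}")
```

Real systems would of course be far richer than a running average, but the basic loop (observe the user, adapt in the background, act without being asked) is the same one that lets an environment fade from view while still doing work on someone’s behalf.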
I sketch ways these AI extenders might work together to promote safety and wellbeing, enable environmental control and comfort, and furnish cognitive, emotional, and social support. Not all the tech I discuss exists yet. But much of it does—and it’s improving rapidly. So, I argue that AI extenders and ambient smart environments may soon offer promising pathways for developing what Matilda Carter (2022) calls “non-dominating” care strategies. These strategies recognise that despite their cognitive decline, people with dementia are still capable of generating their own authentic values and interests—both of which must be respected, to the extent this is possible, when making care decisions and finding ways to empower their independence.
Of course, there are many reasons to be wary of AI and the motives of the tech companies developing it. I also consider a variety of worries, including security and safety risks, the possibility of misinterpretation and error, the role bias might play in rolling out and implementing this tech, concerns about privacy and diminished autonomy, and the threat of social isolation. These are all substantive concerns. Yet AI holds promise all the same. Philosophers therefore ought to be part of ongoing discussions and help decide how we might put it to work in just and beneficial ways.