Healthcare Robots

Ethics, Design and Implementation

Aimee van Wynsberghe

About This Book

This study deals with an underexplored area of the emerging technologies debate: robotics in the healthcare setting. The author explores the role of care and develops a value-sensitive ethical framework for the eventual employment of care robots. Highlighting the range of positive and negative aspects associated with the initiative to design and use care robots, it draws out essential content as a guide to future design, both reinforcing this study's contemporary relevance and giving weight to its prescriptions. The book speaks to, and is meant to be read by, readers across a range of disciplines, from science and engineering to philosophy and ethics.


Chapter 1
Designing Care Robots with Care

Introduction

Imagine an elderly woman, Margaret, who lives alone in her home after the death of her beloved spouse. Margaret would prefer to remain in the comfortable and familiar surroundings of her own home for as long as possible to continue with some independence and dignity. Margaret requires monitoring as her health is declining with age. To avoid moving to a nursing home or other care facility, Margaret has purchased a home-care robot that can: remind her to take her medications; fetch items for her if she is too tired or is already in bed; help with simple cleaning tasks; and facilitate her staying in contact with her family, friends and healthcare provider via video chat. The robot holds the promise of independence for Margaret while maintaining a high level of care in the comfort of her own home.
The above scenario depicts the potential of home-care robots for one particular demographic; although it is not technologically feasible at this moment in time, the hope is that it will be realized in the not-too-distant future. As we move further into the twenty-first century, providing care for the elderly, and for the population in general, will be constrained by a lack of care resources, competition for healthcare services and shortages of care personnel. Increasingly, policy makers and healthcare providers are turning their attention to robots as one solution among others to meet the needs of patients in light of these foreseen challenges.
These robots, called care robots, are currently being designed and developed to be integrated into morally charged contexts like a hospital, nursing home or home-care setting. Based on the contexts within which they will be placed and the roles they will be assigned, roboticists claim that robots ought to be endowed with moral reasoning capabilities. In other words, the robot can decide what to do based on some conception of good/bad, right/wrong. This claim is incredibly problematic when we consider the relationship between moral agency and moral responsibility: a moral agent must bear moral responsibility for the consequences of his/her actions.
The issue of responsibility is of the utmost importance in healthcare contexts and in the therapeutic relationship: a human care-giver must be morally responsible for the outcome of care actions. The professionalization of medicine and nursing is grounded on this fact. This raises the question of whether a robot can be morally responsible for the outcome of its actions, and whether we can consider a robot to be a moral agent. One of the things I will argue in this book is that the robot is not a moral agent but a moral factor, given its impact on the moral decision making of the human care-givers. This argument has significant repercussions for the design of robots in care contexts: they should be intentionally designed to avoid roles which require moral responsibility. This is only one of the many reasons why it is imperative that care robots undergo rigorous ethical reflection and evaluation.
Evaluating care robots is complicated for a variety of reasons; we must examine the question of “how” to evaluate (which ethical theory to apply or indeed if there is one theory that is sufficient), as well as the question of “what” to evaluate (the initiative to use care robots, their design, or their introduction) and, at the same time, we must examine and untangle the ethically good from the ethically bad uses.
The introduction of this work gave an overview of how care robots are seen to be beneficial in care as well as how their use raises ethical concerns. Accordingly, the question to ask is not whether or not we should make them, but how they should be made, and what they ought to be used for. I do not deny the development of this technology nor do I support the blind acceptance and use of this technology; rather, I am seeking a way in which the technology can be designed and made so that it can support widely held cultural care values.
In this book I suggest a way in which care robots can be ethically evaluated in a purposeful manner through an analysis of the relationship between robot design and the realization of care values through the use of the robot. Perhaps more importantly, I am also suggesting and arguing in favour of a more proactive stance: that care robots ought to be designed in a way that ensures the core care values are integrated in the design process and embedded into their design (i.e. their technical content). In this way it is hoped that the care robot is used in a way that realizes care values.
My suggestion for designing care robots with care relies on a normative approach to design that I call the Care-Centred Value-Sensitive Design Approach (CCVSD). I emphasize "with care" as a way of indicating that these care robots will be, and ought to be, designed in a manner that acknowledges the impact they will have on the provision of good care. Moreover, designing with care also refers to the fact that designers are intentionally working to embed care values into a robot's technical content.
This book is dedicated to presenting the theoretical justification for the CCVSD approach as well as demonstrating its utility. I aim to show how the approach can be used in a retrospective manner – to evaluate current care robots – and how the approach can be used prospectively – to steer the design of future care robots from the moment of idea generation up to the moment of implementation. But first, let us have a look at the technology we are discussing.

What is a Care Robot?

There is much confusion surrounding robots in terms of how they are defined and what they are currently capable of. This is due, in part, to the technical knowledge required to understand their functioning, but also to the role the media has played in shaping the image of a robot in the minds of society. Media images such as Star Wars' charming C-3PO, Star Trek's endearing Data and Pixar's adorable WALL-E all represent a class of robots not yet realized by today's technology. Current robots are not capable of interacting in the social way (charming, endearing or adorable) depicted in the media. Researchers are, however, deeply engrossed in the pursuit of creating robots that will one day communicate in this way.
In terms of defining care robots, there is not one capability, appearance, or function that is exclusive to a care robot. Care robots may be used in the home, hospital, nursing home or other setting. They may be used to assist in, support, or provide care for the elderly or otherwise vulnerable persons by providing assistance in care-giving tasks, monitoring a patient’s health status and/or providing social care or companionship (Sharkey and Sharkey, 2012). Care robots may have any number and range of capabilities from planar locomotion (vs. stationary) to voice, facial or emotion recognition. They may appear machine-like, animal- or creature-like, or human/humanoid-like. Additionally, they may have varying degrees of autonomy – the amount of human involvement required for the robot to complete its tasks.
Today’s commercially available healthcare robots include surgical robots (e.g. the daVinci® surgical system), delivery robots (e.g. the TUG® and HelpMateTM robots) and the Paro robot for social interactions. Robots in the testing phases include: rehabilitation robots for stroke survivors, robots to assist with lifting (e.g. the RI-MAN or RIBA [Robot for Interactive Body Assistance] robot to pick up patients and move them from one place to another), robots for bathing disabled patients (e.g. the bathing cabin robot for automatic washing and rinsing used in the Horsens Kommune), and robots for feeding patients (e.g. the Secom MySpoon robot) among others. For the roboticist, there is no limit to what a robot will be capable of in the future.
What connects all these devices is that they are machines capable of carrying out a complex series of actions automatically, usually programmed by computer scientists and engineers (i.e. roboticists). Thus, they have physical embodiment (they are not virtual characters), can act in their environment based on information they have sensed, and they have some range of automation. Although the development of robots used in a healthcare context is still in its infancy there are big plans for the future. Make no mistake, the robots are coming! The question then is: what will this new technology do to the age-old practice of care-giving?

Creating a Framework for the Ethical Evaluation of Care Robots: The Concept of Embedded Values

In this book I presuppose that ethics ought to be incorporated into the design process of robots (van Wynsberghe, 2013a; van Wynsberghe and Robbins, 2014; van Wynsberghe, 2014). This is in contrast to the current work of ethicists and robot scholars who address the ethical concerns retrospectively: once the robot has been made and has a clear application/use (Sharkey and Sharkey, 2011; Sparrow and Sparrow, 2006). The limitation of such an analysis is that ethics can then inform only the implementation of the robot, and has no impact on the resulting design.
Contemporary computer ethics and ethics of technology teach us that there are values embedded into the design of technologies (Friedman et al., 2006; Nissenbaum, 2011; Brey, 2014). This is known as the embedded values concept. An embedded value refers to a value realized as a consequence of using a technology or artefact. If we take a value to be something good, something desirable, something that we want to have happen, then a value in the embedded sense refers to a good consequence of using a technology. As an example, the value of privacy is a consequence of using the phone capabilities (which do not allow for the tracking or tracing of phone conversations) provided by the company Silent Circle.
Of course one must acknowledge that a technology can promote a value while at the same time limit (or prohibit) the promotion of another competing value. Added to this a technology can have more than one use (i.e. dual-use). Take the example of Silent Circle given above. The technology is intended to be used by those in dangerous countries or abusive situations requiring privacy and protection. Although this is the intended use it is still possible that the technology might conceivably be used by unintended users for deplorable uses.
The argument remains that values are manifest during and through, the use of a technology. From this idea certain researchers have concluded that if a technology can realize a value (i.e. can bring a value into existence) then we should be able to intentionally design technologies to realize specified values of ethical importance. This idea led to the approach known as Value-Sensitive Design (VSD) (Friedman et al., 2006). Value Sensitive Design is a computer ethics approach dedicated to systematically incorporating a list of 12 ethical values into computer systems.
Value Sensitive Design has been praised for its ability to account for values in the design of computer systems as well as other technologies. It is the first approach that targets the design process as the place for value analysis and demands that the relationship between the design/use of a technology and the realization of values be considered. The approach consists of a tri-partite methodology to include conceptual, technical and empirical investigations. Conceptual analysis refers to an understanding of the value constructs in a philosophical sense as well as a practical one, i.e. in the context of use. Technical investigations refer to how the technology will work in context and the relationship with values. Empirical investigations involve questioning stakeholders about their preferences and behaviours with respect to the context in which the technology will be used. There are a range of methods for eliciting stakeholder involvement and uncovering the values of importance (e.g. value scenarios, envisioning workshop). There are also methods for identifying the values that cannot be limited or the ones considered the most important to stakeholders. These are referred to as value dams and flows respectively. These investigations are iterative rather than linear.
The approach I develop here, the CCVSD approach, relies on the starting point and basic skeleton of VSD but with notable differences. For starters, I am not creating a robot and thus my technical investigation will not involve actual experiments of users with the technology in a context of use. I do, however, deal with the technical content of the robot in great descriptive detail. Second, as I mentioned in the introduction I do not embark on stakeholder analysis to gather user preferences. Instead, the CCVSD approach generates a set of normative standards to follow rather than outlining the range of user preferences for a technology.
The CCVSD approach also shares the iterative aspect of VSD. In practice, the conceptual investigation of values in context alongside their embedding into a technology are overlapping practices that cannot be separated. Relating components to one another, as I will do, only strengthens the fluidity and consistency of the approach.
As with any new approach, the strength and popularity of VSD have been accompanied by various criticisms. The criticisms centre on: the lack of a normative foundation for ethical evaluation (Manders-Huits, 2011); the lack of clarity surrounding the concept of value and/or embedded value (Van de Poel, 2009); the lack of understanding of how values are translated into design requirements (Van de Poel and Kroes, 2014); and, the disconnect between the intended values of engineers and the values realized once the artefact is used in context (ibid.). I will address each of these criticisms individually in considerable detail throughout the chapters of this book.
With respect to the first criticism, the lack of a normative foundation, the fear is that VSD relies on intuition rather than on an ethical tradition to ground it normatively. To mitigate this concern I use the care ethics tradition to provide the normative foundation for making any claim with respect to the robot’s design. For this reason, among others, I refer to the approach developed here as Care Centred – essentially placing care ethics as a focal point in the approach.

Designing Care Robots for Care

To give a brief overview of the CCVSD approach, it consists of a framework of components of ethical significance (see Table 1.1) along with a “user manual” for evaluations. The framework, what I refer to as the Care-Centred framework, is a list of components to take into consideration in the evaluation of a care robot: the context of use, the care practice, the actors involved, the type of care robot (its capabilities, appearance etc.) and the list of values involved for the described practice in the stated context (i.e. the interpretation and prioritization of care values). The framework orients the ethicist and design team to the ethical issues demanding attention from the care ethics perspective.
The ethical questions and issues for different robots used in different practices by different users are going to be varied; however, this does not undermine that there are certain components that must be addressed in every instance in which a care robot is used. In every instance we must understand who the direct and indirect actors involved are and how they will be impacted. In every instance we must understand the care practice in terms of how values come into being (rather than understanding the practice in mechanical terms only). In every instance we must understand the context we are speaking of and the relationship this context has with the interpretation and expression of values. Accordingly, the framework is meant to draw the design team’s attention to certain components and to show them how they are to deal with these components. The framework is not intended to say that each robot will undergo the same evaluation but rather, each robot will be evaluated according to the same criteria (i.e. the components of the framework).
Table 1.1 Care-centred framework for the ethical evaluation of care robots

Context – hospital (and ward) vs. nursing home vs. home setting …
Practice – lifting vs. bathing vs. feeding vs. delivery of food and/or sheets, collection of samples, playing games, etc. …
Actors involved – human (e.g. nurse, patient, cleaning staff, other personnel) and non-human (e.g. care room, mechanical bed, curtain, wheelchair, mechanical lift, robot …)
Type of robot – assistive vs. enabling vs. replacement
Manifestation of moral elements – Attentiveness, responsibility, competence, responsiveness

Note: Table also found in van Wynsberghe 2013 and van Wynsberghe 2014.
Each of these elements has been judiciously chosen based on an analysis of the necessary and sufficient fundamentals for good care. This analysis is done from the care ethics perspective in Chapter 2. A detailed explanation of the elements and the justification for their place in the framework is provided in Chapter 5.
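The components of the Care-Centred framework amount to a checklist a design team could fill in for each evaluation. As a purely illustrative sketch (not part of the book's method), the framework might be recorded as a small data structure; all names, fields and the example evaluation below are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum


class RobotType(Enum):
    """The three robot categories named in Table 1.1."""
    ASSISTIVE = "assistive"
    ENABLING = "enabling"
    REPLACEMENT = "replacement"


@dataclass
class CareRobotEvaluation:
    """One evaluation structured by the framework's components."""
    context: str                  # e.g. hospital ward, nursing home, home setting
    practice: str                 # e.g. lifting, bathing, feeding, delivery
    human_actors: list[str]       # e.g. nurse, patient, cleaning staff
    nonhuman_actors: list[str]    # e.g. mechanical bed, wheelchair, the robot
    robot_type: RobotType
    # The four moral elements, interpreted for this practice in this context
    moral_elements: dict[str, str] = field(default_factory=dict)


# Hypothetical evaluation of a lifting robot on a hospital ward
lift_eval = CareRobotEvaluation(
    context="hospital ward",
    practice="lifting a patient from bed to wheelchair",
    human_actors=["nurse", "patient"],
    nonhuman_actors=["mechanical bed", "wheelchair", "lifting robot"],
    robot_type=RobotType.REPLACEMENT,
    moral_elements={
        "attentiveness": "does using the robot keep the nurse present and attentive?",
        "responsibility": "who bears responsibility if the lift goes wrong?",
        "competence": "is the lift performed safely and comfortably?",
        "responsiveness": "can the patient signal discomfort and be heard?",
    },
)
```

The point of such a sketch is only that every robot is evaluated against the same components, even though the answers recorded for each component will differ from robot to robot.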

Why Design?

Discussing robots in terms of their “design” and the “design process” from which they result, demands an understanding of what I mean by both design and design process. By design I neither refer exclusively to the external appearance of the robot nor exclusively to the software programming of the robot; rather, to a combination of the appearance and capabilities of the robot. Of course the capabilities of the robot result from the programmed computer code and thus programming is subsumed within the element of capabilities. Appearance refers to the robot being humanoid, machine-like and/or creature-like as well as the morphology of the robot – the form and structure of the robot (e.g. does the robot have arms, does it have legs or wheels etc.).
In contrast, Feng and Feenberg describe "design" as a "process of consciously shaping an artefact to adapt to its goals and environments" (Feng and Feenberg, 2008, p. 105). This process of shaping the artefact is what I refer to here as the design process, as distinct from design itself. My insistence on treating design and the design process separately, the one being the result and the other the process from which it results, rests predominantly on the relationship between artefacts and morality as conceptualized in the ethics of technology and Science and Technology Studies (STS) domains.

Design and Morality

For some, artefacts are believed to have a kind of morality. Oosterlaken conceptualizes this morality in terms of a technology’s ability to “expand capabilities” (Oosterlaken, 2009). This morality, or moral impact if you will, is a result both of the designers’ intentional decisions as well as the technology’s place within a network. I reference the term “network” intentionally to relate to Latour’s approach known as Actor-Network Theory (ANT). For Latour, a network describes an amalgamation of human and non-human actors which interact together for: moral decision making, establishing norms and meanings and, determining outcomes. Actors are both human and non-human, thus a robot may also be considered an actor.
For some scholars in the field of STS, the morality of the artefact is accounted for through the phenomenon known as domestication; in short, the impact the technology has once it becomes an actor in a network of other human and non-human actors. Hence, domestication studies build on the concept of the network and the interactions between human and non-human actors (the material environment). This impact is observed and studied in terms of the meaning the technology takes on, how this meaning is established, how the technology propagates or alters existing norms, and lastly, in terms of how the technology prioritizes and i...


Citation styles for Healthcare Robots

APA 6 Citation

Wynsberghe, A. van. (2016). Healthcare Robots (1st ed.). Taylor and Francis. Retrieved from https://www.perlego.com/book/1633429/healthcare-robots-ethics-design-and-implementation-pdf (Original work published 2016)
