A rising elderly population and a diminishing number of family and professional carers have led to calls for care robots to intervene. This leaves the quality of robot-delivered care to be determined by designers, for-profit companies, nursing codes of practice and conduct, potential user sample groups, and other stakeholders. What is missing is the carer who consciously makes good ethical decisions during practice. Good care is 'determinative in practice': a carer can make good decisions because they make them within the carer-patient relationship. If a robot is to be capable of good care ethics on the same level as a human carer, it needs to be conscious and able to make dynamic decisions in practice. Moreover, a care robot must conduct patient interactions in appropriate ways, tailored to the person in its care, at run time, because good care, as well as being determinative in practice, is tailored to the individual. The introduction of robotic care determined by a limited set of stakeholders puts customised care at risk and could turn the quality of elderly care into 'elderly management'. This study introduces a new care robot framework, the attentive framework, which combines care-centred value sensitive design (CCVSD) for the design process with a computationally conscious information system (IS) that makes practice-determinative decisions at run time using an extrinsic ordering of care values. Although VSD has been extensively researched in the IS literature, CCVSD has not. The results of this study suggest that this new care robot framework, inspired by CCVSD, is capable of determining good, customised patient care at run time. The contribution of this study lies in its exploration of end users' willingness to trust known AI decisions and unwillingness to trust unknown ones, and in demonstrating the importance of, and desire for, good, customised robot-delivered care.
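The abstract does not specify how an "extrinsic ordering of care values" would be implemented, but the idea can be illustrated with a minimal sketch: candidate care actions are scored against an externally supplied ranking of care values, and the action that best respects the higher-ranked values for the individual patient is selected at run time. The value names, actions, and scores below are hypothetical illustrations, not part of the attentive framework itself.

```python
# Illustrative sketch only: a hypothetical "extrinsic care value ordering"
# used to rank candidate care actions at run time. The value names, actions,
# and scores are invented for illustration and are not taken from the paper.

from dataclasses import dataclass, field

# Extrinsic ordering: higher-priority care values come first. In the attentive
# framework this ordering would be supplied from outside the robot (e.g. agreed
# with the patient or carer), not hard-coded as it is here.
CARE_VALUE_ORDER = ["safety", "dignity", "autonomy", "companionship"]


@dataclass
class CandidateAction:
    name: str
    # How well the action satisfies each care value for this patient (0.0-1.0).
    value_scores: dict[str, float] = field(default_factory=dict)


def rank_actions(actions: list[CandidateAction],
                 value_order: list[str]) -> list[CandidateAction]:
    """Order actions lexicographically by the extrinsic value ordering:
    an action that better satisfies a higher-ranked value wins, and
    lower-ranked values only break ties."""
    def key(action: CandidateAction) -> tuple[float, ...]:
        return tuple(action.value_scores.get(v, 0.0) for v in value_order)
    return sorted(actions, key=key, reverse=True)


if __name__ == "__main__":
    candidates = [
        CandidateAction("remind patient to take medication now",
                        {"safety": 0.9, "dignity": 0.6, "autonomy": 0.3}),
        CandidateAction("ask patient when they would like the reminder",
                        {"safety": 0.9, "dignity": 0.8, "autonomy": 0.9}),
    ]
    best = rank_actions(candidates, CARE_VALUE_ORDER)[0]
    print(f"Selected action: {best.name}")
```

A lexicographic ordering is only one way to realise a value ordering; weighted sums or rule-based arbitration would be alternatives, and the per-patient scores are where the tailoring to the individual would enter.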
The way elderly care is delivered is changing. Care robots are being introduced to accommodate the increasing number of elderly people and the declining number of people available to care for them. This change introduces ethical issues into robotics and healthcare. The two-part study (heuristic evaluation and survey) reported here examines a phenomenon that results from that change, and the phenomenon arises from a contradiction. Of the 102 survey respondents, 12 were elderly; all but two of those (who were undecided) wanted to be able to change how the presented care robot made decisions, and seven of the twelve wanted to be able to examine its decision-making process to ensure the care provided is personalized. At the same time, 34% of the elderly participants said they were willing to trust the care robot inherently, compared with only 16% of participants under fifty, and 66% of the elderly respondents said they were very likely or likely to accept and use such a care robot in their everyday lives. This contradiction between inherent trust and a simultaneous desire for control gives rise to the phenomenon: elderly people in need want control over their care to ensure it is personalized, but many may desperately take any help they can get. The possible causes, and ethical implications, of this phenomenon are the focus of this paper.
A relatively new area within information systems is the design of robotic healthcare. This narrative review considers the question: how does one ethically design an elderly care robot? To answer it, robot ethicists consider the ethical impact of robots, how designers ought to design robots ethically, and how a robot ought to be designed so that its behaviour is ethical. The last of these considerations defines a further field of study, machine ethics. Machine ethicists ask: how does one design a robot information system to behave ethically? Thus, robot ethics is concerned with the ethics of design practice, whereas machine ethics is concerned with the ethics of the product designed. The findings of this narrative review point the way to answering both questions with a new design approach grounded in care and professional ethics, value sensitive design, and the integration of two machine ethics schools of thought.