An important aspect of a robot's social behavior is to convey the right amount of trustworthiness. Task performance has been shown to be an important source for trustworthiness judgments. Here, we argue that factors such as a robot's behavioral style can play an important role as well. Our approach to studying the effects of a robot's performance and behavioral style on human trust involves experiments with simulated robots in video human-robot interaction (VHRI) and immersive virtual environments (IVE). Although VHRI and IVE settings cannot substitute for genuine interaction with a real robot, they can provide useful complementary approaches to experimental research in social human-robot interaction. VHRI enables rapid prototyping of robot behaviors. Simulating human-robot interaction in IVEs can be a useful tool for measuring human responses to robots and can help avoid the many constraints imposed by real-world hardware. However, there are also difficulties with the generalization of results from one setting (e.g., VHRI) to another (e.g., IVE or the real world), which we discuss. In this paper, we use animated robot avatars in VHRI to rapidly identify robot behavioral styles that affect human trust assessment of the robot. In a subsequent study, we use an IVE to measure behavioral interaction between humans and an animated robot.
Trust constitutes a fundamental strategy for dealing with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet this approach has come under attack from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) that it is also dangerous, that is, that we should not trust AI, particularly when the stakes are as high as they routinely are in medicine. In this paper, we aim to defend a notion of trust in the context of medical AI against both charges. To do so, we highlight the technically mediated intentions manifest in AI systems, rendering trust a conceptually plausible stance for dealing with them. Drawing on literature from human–robot interaction, psychology, and sociology, we then propose a novel model for analysing notions of trust, distinguishing between three aspects: reliability, competence, and intentions. We discuss each aspect and make suggestions regarding how medical AI may become worthy of our trust.
The present research investigated whether human-robot interaction (HRI) can be improved by a robot's nonverbal warning signals. Ideally, when a robot signals that it cannot guarantee good performance, people could take preventive actions to ensure the successful completion of the robot's task. In two experiments, participants either learned that a robot's gestures predicted subsequent poor performance, or they did not. Participants evaluated a robot whose gestures were predictive of its performance as more trustworthy, understandable, and reliable than a robot whose gestures were not predictive of its performance. Finally, participants who learned the relation between gestures and performance improved collaboration with the robot through prevention behavior immediately after a predictive gesture. This limited the negative consequences of the robot's mistakes, thus improving the interaction.