Recognizing and responding to human affect is important for collaboration in joint human-robot teams. In this paper we present an integrated affect and cognition architecture for HRI and report results from an experiment with this architecture showing that expressing affect, and responding to human affect with affective expressions, can significantly improve team performance in a joint human-robot task.
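To make the idea concrete, the core response mechanism can be pictured as a sense-and-respond loop over an estimated affective state. The sketch below is purely illustrative; the valence representation, thresholds, and expression labels are our own assumptions, not the architecture described in the paper:

```python
# Illustrative only: a minimal affect-response loop, assuming the robot
# receives a scalar valence estimate in [-1, 1] for the human's state.
from dataclasses import dataclass

@dataclass
class AffectState:
    valence: float  # negative = distress/frustration, positive = contentment

def choose_expression(human_affect: AffectState) -> str:
    """Pick an affective expression in response to the human's state."""
    if human_affect.valence < -0.3:
        return "reassuring"    # acknowledge frustration or stress
    if human_affect.valence > 0.3:
        return "enthusiastic"  # reinforce positive engagement
    return "neutral"

# Example: a distressed teammate elicits a reassuring expression.
print(choose_expression(AffectState(valence=-0.6)))  # -> "reassuring"
```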
Trust in human-robot interaction (HRI) is measured in two main ways: through subjective questionnaires and through behavioral tasks. To optimize measurements of trust through questionnaires, the field of HRI faces two challenges: the development of standardized measures that apply to a variety of robots with different capabilities, and the exploration of social and relational dimensions of trust in robots (e.g., benevolence). In this paper we look at how different trust questionnaires [18,30,35] fare given these challenges, which pull in different directions (being general vs. being exploratory), by studying whether people think the items in these questionnaires are applicable to different kinds of robots and interactions. In Study 1 we show that after being presented with a robot (non-humanoid) and an interaction scenario (fire evacuation), participants rated multiple questionnaire items such as "This robot is principled" as "Non-applicable to robots in general" or "Non-applicable to this robot." In Study 2 we show that the frequency of these ratings changes (indeed, even for items rated as N/A to robots in general) when a new scenario is presented (game playing with a humanoid robot). Finally, while overall trust scores remained robust to N/A ratings, our results revealed potential fallacies in the way these scores are commonly interpreted. We conclude with recommendations for the development, use, and results-reporting of trust questionnaires in future studies, as well as theoretical implications for the field of HRI.
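One way to see why N/A ratings matter for score interpretation is to compare common scoring conventions. The example below is ours, not drawn from the paper or from [18,30,35]; the items, scale, and values are invented for illustration:

```python
# Illustrative only: how N/A responses can shift an averaged trust score
# depending on the scoring convention chosen by the researcher.
ratings = {"capable": 6, "reliable": 5, "principled": None, "benevolent": None}
SCALE_MIDPOINT = 4  # midpoint of a hypothetical 1-7 scale

applicable = [v for v in ratings.values() if v is not None]

# Convention 1: average only the items the participant found applicable.
score_drop_na = sum(applicable) / len(applicable)              # 5.5

# Convention 2: impute the scale midpoint for N/A items.
imputed = [v if v is not None else SCALE_MIDPOINT for v in ratings.values()]
score_impute = sum(imputed) / len(imputed)                     # 4.75

print(score_drop_na, score_impute)  # same data, different "trust" scores
```

Either convention yields a single number, but the two numbers summarize different constructs: trust over the items the participant accepted as meaningful versus trust diluted by items they rejected outright.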
CCS CONCEPTS: • Human-centered computing → Interaction paradigms; • Computer systems organization → Robotics.
Among the many anticipated roles for robots in the future is that of being a human teammate. Beyond the technological hurdles in hardware and control that must be overcome to make robots fit to work with humans, the added complication is that humans have many conscious and subconscious expectations of their teammates; indeed, we argue that teaming is mostly a cognitive rather than a physical coordination activity. This introduces new challenges for the AI and robotics community and requires fundamental changes to the traditional approach to the design of autonomy. With this in mind, we propose an update to the classical view of the intelligent agent architecture, highlighting the requirements for mental modeling of the human in the deliberative process of the autonomous agent. In this article, we briefly outline recent efforts by us and others in the community toward developing cognitive teammates along these guidelines.
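The architectural shift can be sketched in miniature: deliberation consumes a model of the human's expectations alongside the world state, rather than the world state alone. All class and method names below are our own illustrative assumptions, not the architecture proposed in the article:

```python
# Illustrative only: an agent whose deliberation consults a human mental
# model and flags actions that would violate the human's expectations.
from dataclasses import dataclass, field

@dataclass
class HumanModel:
    expected_robot_action: str = "wait"  # what the human thinks the robot will do

@dataclass
class Agent:
    world_state: dict = field(default_factory=dict)
    human_model: HumanModel = field(default_factory=HumanModel)

    def deliberate(self, best_action: str) -> str:
        # Classical view: act on the world model alone. Teaming view: if the
        # chosen action diverges from the human's expectation, announce the
        # deviation so the teammate can update their mental model.
        if best_action != self.human_model.expected_robot_action:
            self.explain(best_action)
        return best_action

    def explain(self, action: str) -> None:
        print(f"Announcing plan deviation: will '{action}' instead of "
              f"'{self.human_model.expected_robot_action}'")

agent = Agent()
agent.deliberate("fetch_tool")  # triggers an explanation before acting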