Assistive robots are emerging to address a social need arising from changing demographic trends such as an ageing population. The main emphasis is to offer independence to those in need and to fill a potential labour gap in response to the increasing demand for caregiving. This paper presents work on a dressing task performed by a compliant robotic arm on a mannequin. Several strategies for carrying out this task with minimal complexity and a mix of sensors are explored. A Vicon tracking system is used to determine the arm position of the mannequin for trajectory planning by means of waypoints. Methods of failure detection through torque feedback and sensor tag data were explored. A fixed vocabulary of recognised speech commands was implemented, allowing the user to successfully correct detected dressing errors. This work indicates that low-cost sensors and simple HRI strategies, without complex learning algorithms, can be used successfully in a robot-assisted dressing task.
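The abstract does not give implementation details, so the following is only a minimal sketch of how torque-feedback failure detection might be combined with a fixed vocabulary of corrective speech commands. Every name here (TORQUE_LIMIT_NM, VOCABULARY, the robot interface methods) is a hypothetical illustration, not the authors' code.

```python
# Hypothetical sketch: torque-threshold failure detection plus a fixed
# speech-command vocabulary for error correction (not the authors' code).

TORQUE_LIMIT_NM = 4.0  # illustrative per-joint threshold for detecting a snag

# Fixed vocabulary of recognised words mapped to corrective actions.
VOCABULARY = {
    "stop": "halt_motion",
    "back": "retreat_last_waypoint",
    "continue": "resume_trajectory",
}

def dressing_failed(joint_torques_nm):
    """Flag a dressing failure if any joint torque exceeds the threshold,
    e.g. when the garment snags on the mannequin's arm."""
    return any(abs(t) > TORQUE_LIMIT_NM for t in joint_torques_nm)

def handle_speech(command, robot):
    """Map a recognised word from the fixed vocabulary to a robot action;
    words outside the vocabulary are ignored rather than guessed.
    `robot` stands in for an assumed controller interface."""
    action = VOCABULARY.get(command.lower())
    if action == "halt_motion":
        robot.stop()
    elif action == "retreat_last_waypoint":
        robot.move_to(robot.previous_waypoint())
    elif action == "resume_trajectory":
        robot.resume()
```

A fixed vocabulary like this keeps the speech interface simple and avoids the need for learned language models, which matches the paper's emphasis on low-cost sensing and simple HRI strategies.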
For robots that provide physical assistance, maintaining synchronicity between robot and human movement is a precursor of interaction safety. Existing research on collaborative HRI does not consider how synchronicity can be affected if humans are subjected to cognitive overloading and distractions during close physical interaction. Cognitive neuroscience has shown that unexpected events during interactions affect not only action cognition but also human motor control (Gentsch et al., Cognition, 2016, 146, 81–89). If the robot is to safely adapt its trajectory to distracted human motion, quantitative changes in the human movement should be evaluated. The main contribution of this study is the analysis and quantification of disrupted human movement during a physical collaborative task that involves robot-assisted dressing. Quantifying disrupted movement is the first step towards maintaining the synchronicity of the human-robot interaction. The human movement data, collected from a series of experiments in which participants were subjected to cognitive loading and distractions during the human-robot interaction, are projected into a 2-D latent space that efficiently represents the high dimensionality and non-linearity of the data. The quantitative data analysis is supported by a qualitative study of user experience, using the NASA Task Load Index to measure perceived workload and the PeRDITA questionnaire to represent the human psychological state during these interactions. In addition, we present an experimental methodology for collecting interaction data in this type of human-robot collaboration that provides realism, experimental rigour and high fidelity of the human-robot interaction in the scenarios.
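The abstract does not name the technique used to obtain the 2-D latent space. As an illustrative stand-in for a non-linear projection, the sketch below uses scikit-learn's KernelPCA on a matrix of pose features; the file name, array layout, and kernel settings are assumptions, not the authors' method.

```python
# Illustrative stand-in only: embed high-dimensional movement data in a
# 2-D latent space with a non-linear method (KernelPCA); the paper's actual
# projection technique is not specified in this abstract.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.preprocessing import StandardScaler

# Assumed layout: one row per time frame, one column per joint coordinate,
# e.g. (n_frames, n_joints * 3) for 3-D marker positions.
movement = np.load("arm_trajectories.npy")  # hypothetical file name

# Standardise features so no single joint dominates the projection.
scaled = StandardScaler().fit_transform(movement)

# Non-linear 2-D embedding; the RBF kernel width would need tuning per dataset.
latent_2d = KernelPCA(n_components=2, kernel="rbf", gamma=0.1).fit_transform(scaled)

# latent_2d can then be compared across cognitively loaded vs. unloaded
# conditions, e.g. by measuring the spread of each participant's trajectory.
print(latent_2d.shape)  # (n_frames, 2)
```

Any non-linear dimensionality-reduction method (e.g. an autoencoder or GP-LVM) could play the same role; the point of the sketch is only that disrupted and undisrupted trajectories become directly comparable once reduced to two dimensions.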