In June 2015, the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge (DRC) Finals were held in Pomona, California. The DRC Finals served as the third phase of the program, designed to test the capabilities of semi-autonomous, remotely operated humanoid robots performing disaster response tasks under degraded communications. All competition teams were responsible for developing their own interaction method to control their robot. Of the 23 teams in the competition, 20 consented to participate in this study of human–robot interaction (HRI). The evaluation team observed the consenting teams during task execution in their control rooms (with the operators), and all 23 teams were observed on the field during the public event (with the robot). A variety of data were collected both before the competition and on-site. Each participating team's interaction methods were distilled into a set of characteristics pertaining to the robot, operator strategies, control methods, and sensor fusion. Each task was decomposed into subtasks that were classified according to the complexity of the mobility and/or manipulation actions being performed. Performance metrics were calculated for the number of task attempts, performance time, and critical incidents, and these metrics were then correlated with each team's interaction methods. The results of this analysis suggest that a combination of HRI characteristics, including balancing the capabilities of the operator with those of the robot and using multiple sensor fusion instances with variable reference frames, positively impacted task performance. A set of guidelines for designing HRI with remote, semi-autonomous humanoid robots is proposed based on these results.
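To make the kind of analysis described above concrete, the following is a minimal Python sketch of how team-level performance metrics could be aggregated and correlated with an interaction-method characteristic. All team values, column names, and the choice of Spearman correlation are illustrative assumptions, not the study's actual data or analysis.

# Hypothetical sketch: relating one HRI characteristic to per-team performance metrics.
# The values below are invented for illustration; they are not the DRC teams' data.
import pandas as pd
from scipy.stats import spearmanr

# One row per team: an ordinal encoding of an interaction characteristic
# (number of sensor fusion instances) and three aggregate performance metrics.
teams = pd.DataFrame({
    "sensor_fusion_instances": [1, 3, 2, 4, 1, 3],
    "task_attempts":           [12, 8, 10, 7, 14, 9],
    "performance_time_min":    [55, 38, 47, 33, 60, 41],
    "critical_incidents":      [5, 2, 3, 1, 6, 2],
})

# Correlate the characteristic with each performance metric.
for metric in ["task_attempts", "performance_time_min", "critical_incidents"]:
    rho, p = spearmanr(teams["sensor_fusion_instances"], teams[metric])
    print(f"{metric}: Spearman rho = {rho:.2f}, p = {p:.3f}")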
The quality of life of people with special needs, such as residents of healthcare facilities, may be improved by operating social telepresence robots, which allow them to participate in remote activities with friends or family. However, to date, such platforms do not exist for this population. Methodology: Our research utilized an iterative, bottom-up, user-centered approach, drawing upon our assistive robotics experiences. Based on the findings of our formative user studies, we developed an augmented reality user interface for our social telepresence robot. Our user interface focuses primarily on human-human interaction and communication through video, while providing support for semi-autonomous navigation. We conducted a case study (n=4) with our target population in which the robot was used to visit a remote art gallery. Results: All of the participants were able to operate the robot to explore the gallery, form opinions about the exhibits, and engage in conversation. Significance: This case study demonstrates that people from our target population can successfully take on the active role of operating a telepresence robot.
Purpose – The authors believe that people with cognitive and motor impairments may benefit from using telepresence robots to engage in social activities. To date, these systems have not been designed for use by people with disabilities as the robot operators. The paper aims to discuss these issues. Design/methodology/approach – The authors conducted two formative evaluations using a participatory action design process. First, the authors conducted a focus group (n=5) to investigate how members of the target audience would want to direct a telepresence robot in a remote environment using speech. The authors then conducted a follow-on experiment in which participants (n=12) used a telepresence robot or directed a human in a scavenger hunt task. Findings – The authors collected a corpus of 312 utterances (firsthand, as opposed to speculative) relating to spatial navigation. Overall, the analysis of the corpus supported several speculations put forth during the focus group. Further, it showed few statistically significant differences between the speech used in the human and robot agent conditions; thus, the authors believe that, for the task of directing a telepresence robot's movements in a remote environment, people will speak to the robot in a manner similar to speaking to another person. Practical implications – Based upon the two formative evaluations, the authors present four guidelines for designing speech-based interfaces for telepresence robots. Originality/value – Robot systems designed for general use do not typically consider people with disabilities. The work is a first step towards having our target population take the active role of the telepresence robot operator.
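One way the human-versus-robot comparison of utterances could be tested is sketched below: a chi-square test on whether a single speech feature occurs equally often in the two agent conditions. The feature, the counts, and the choice of test are assumptions for illustration only; they are not the authors' corpus or analysis.

# Hypothetical sketch: comparing one speech feature across agent conditions.
# Counts are invented for illustration; they do not come from the 312-utterance corpus.
from scipy.stats import chi2_contingency

# Rows: robot-directed vs. human-directed utterances.
# Columns: utterances with vs. without the feature (e.g., an egocentric spatial term).
counts = [[68, 88],
          [72, 84]]

chi2, p, dof, _expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A non-significant result here would be consistent with the finding that people
# speak to the robot much as they speak to another person for this task.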