A handover is a complex collaboration in which actors coordinate in time and space to transfer control of an object. This coordination comprises two processes: the physical process of moving close enough to transfer the object, and the cognitive process of exchanging information to guide the transfer. Despite this complexity, we humans are capable of performing handovers seamlessly in a wide variety of situations, even unexpected ones. This suggests a common procedure that guides all handover interactions. Our goal is to codify that procedure. To that end, we first study how people hand objects to each other in order to understand their coordination process and the signals and cues they exchange with their partners. Based on these studies, we propose a coordination structure for human-robot handovers that considers the physical and social-cognitive aspects of the interaction separately. This handover structure describes how people approach, reach out their hands, and transfer objects while simultaneously coordinating the what, when, and where of handovers: to agree that the handover will happen (and with what object), to establish the timing of the handover, and to decide the configuration at which the handover will occur. We experimentally evaluate human-robot handover behaviors that exploit this structure, and offer design implications for seamless human-robot handover interactions.
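One way to picture the structure this abstract describes is as a physical phase progression (approach, reach, transfer) running alongside a social-cognitive agreement on the what, when, and where of the handover. The sketch below is purely illustrative and not code from the paper; the class names, fields, and example values are assumptions.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple


class Phase(Enum):
    """Physical phases of a handover, as described in the abstract."""
    APPROACH = auto()   # partners move close enough to transfer the object
    REACH = auto()      # the giver extends the object toward the receiver
    TRANSFER = auto()   # the giver releases once the receiver has taken hold


@dataclass
class HandoverAgreement:
    """Social-cognitive coordination running in parallel with the phases."""
    what: Optional[str] = None                  # which object will be handed over
    when: Optional[float] = None                # agreed timing (e.g., seconds from now)
    where: Optional[Tuple[float, float, float]] = None  # transfer configuration (pose)

    def ready(self) -> bool:
        """The transfer can proceed once what, when, and where are all settled."""
        return None not in (self.what, self.when, self.where)


# Example: the agreement is filled in while the giver is still approaching.
agreement = HandoverAgreement(what="mug")
agreement.when = 2.0                   # e.g., inferred from the partner's gaze and reach
agreement.where = (0.6, 0.0, 1.1)      # e.g., a mutually visible point between the partners
phase = Phase.TRANSFER if agreement.ready() else Phase.APPROACH
print(phase, agreement)
```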
We present the hardware design, software architecture, and core algorithms of HERB 2.0, a bimanual mobile manipulator developed at the Personal Robotics Lab at Carnegie Mellon University. We have developed HERB 2.0 to perform useful tasks for and with people in human environments. We exploit two key paradigms of human environments: that they have structure a robot can learn, adapt to, and exploit, and that they demand general-purpose capability in robotic systems. In this paper, we reveal some of the structure present in everyday environments that we have been able to harness for manipulation and interaction, comment on the particular challenges of working in human spaces, and describe some of the lessons we learned from extensively testing our integrated platform in kitchen and office environments.
We have developed the CHIMP (CMU Highly Intelligent Mobile Platform) robot as a platform for executing complex tasks in dangerous, degraded, human-engineered environments. CHIMP has a near-human form factor, work envelope, strength, and dexterity to work effectively in these environments. It avoids the need for complex control by maintaining static rather than dynamic stability. Utilizing various sensors embedded in the robot's head, CHIMP generates full three-dimensional representations of its environment and transmits these models to a human operator to achieve latency-free situational awareness. This awareness is used to visualize the robot within its environment and preview candidate free-space motions. Operators using CHIMP are able to select between task, workspace, and joint space control modes to trade between speed and generality. Thus, they are able to perform remote tasks quickly, confidently, and reliably, due to the overall design of the robot and software. CHIMP's hardware was designed, built, and tested over 15 months leading up to the DARPA Robotics Challenge. The software was developed in parallel using surrogate hardware and simulation tools. Over a six-week span prior to the DRC Trials, the software was ported to the robot, the system was debugged, and the tasks were practiced continuously. Given the aggressive schedule leading to the DRC Trials, development of CHIMP focused primarily on manipulation tasks. Nonetheless, our team finished 3rd out of 16. With an upcoming year to develop new software for CHIMP, we look forward to improving the robot's capability and increasing its speed to compete in the DRC Finals.
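The trade between joint-space and workspace control modes mentioned above can be illustrated with a small kinematics sketch. This is not CHIMP's control software; it assumes a hypothetical two-link planar arm and uses a damped least-squares Jacobian step for the workspace mode, with made-up link lengths and gains.

```python
import numpy as np

# Hypothetical 2-link planar arm: joint-space commands are fast and direct,
# while workspace (Cartesian) commands are more general but need the Jacobian.
LINK1, LINK2 = 0.5, 0.4  # link lengths in metres (assumed values)


def forward_kinematics(q):
    """End-effector (x, y) for joint angles q = [q1, q2]."""
    x = LINK1 * np.cos(q[0]) + LINK2 * np.cos(q[0] + q[1])
    y = LINK1 * np.sin(q[0]) + LINK2 * np.sin(q[0] + q[1])
    return np.array([x, y])


def jacobian(q):
    """2x2 manipulator Jacobian of the planar arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-LINK1 * s1 - LINK2 * s12, -LINK2 * s12],
                     [ LINK1 * c1 + LINK2 * c12,  LINK2 * c12]])


def joint_space_step(q, q_target, gain=0.5):
    """Joint-space mode: command joints directly toward a target posture."""
    return q + gain * (q_target - q)


def workspace_step(q, x_target, gain=0.5, damping=1e-3):
    """Workspace mode: damped least-squares step toward a Cartesian target."""
    J = jacobian(q)
    error = x_target - forward_kinematics(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), gain * error)
    return q + dq


q = np.array([0.3, 0.6])
print(forward_kinematics(workspace_step(q, x_target=np.array([0.7, 0.3]))))
```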
When performing physical collaboration tasks, such as packing a picnic basket together, humans communicate strongly and often subtly via multiple channels: gaze, speech, gestures, movement, and posture. Understanding and participating in this communication enables us to predict a physical action rather than react to it, producing seamless collaboration. In this paper, we automatically learn key discriminative features that predict the intent to hand over an object using machine learning techniques. We train and test our algorithm on multichannel vision and pose data collected from an extensive user study in an instrumented kitchen. Our algorithm outputs a tree of possibilities, automatically encoding various types of pre-handover communication. A surprising outcome is that mutual gaze and interpersonal distance, often cited as being key for interaction, were not key discriminative features. Finally, we discuss the immediate and future impact of this work for human-robot interaction.
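The pipeline described above, learning which multichannel features discriminate handover intent and expressing the result as a tree, can be sketched roughly as follows. The feature names, placeholder data, and choice of a scikit-learn decision tree are assumptions for illustration only, not the study's actual feature set or learning algorithm.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-frame features from vision/pose tracking; names and data
# below are placeholders, not the study's actual feature set.
feature_names = [
    "mutual_gaze",          # 1 if partners are looking at each other
    "interpersonal_dist",   # distance between partners (m)
    "object_in_hand",       # 1 if the giver is holding an object
    "arm_extension",        # normalised reach of the giver's arm (0..1)
    "torso_orientation",    # giver's torso angle toward the receiver (rad)
]

rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))        # placeholder observations
y = (X[:, 2] > 0.5) & (X[:, 3] > 0.6)            # placeholder "handover intent" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree plays the role of the "tree of possibilities":
# its splits expose which channels discriminate pre-handover behaviour.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(clf, feature_names=feature_names))
print("held-out accuracy:", clf.score(X_test, y_test))
```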
This study demonstrates the effectiveness of using a multi-functional miniature in vivo robot platform to perform laparoendoscopic single-site surgery (LESS).