Robust remote teleoperation of high-DOF manipulators is of critical importance across a wide range of robotics applications. Contemporary robot manipulation interfaces primarily utilize a free positioning pose specification approach to independently control each axis of translation and orientation in free space. In this work, we present two novel interfaces, constrained positioning and point-and-click, which incorporate scene information, including points of interest and local surface geometry, into the grasp specification process. We also present the results of a user study evaluating the effects of increased use of scene information in grasp pose specification algorithms for general object manipulation. The results of our study show that constrained positioning and point-and-click significantly outperform the widely used free positioning approach, reducing both the number of grasping errors and the number of user interactions required to specify poses. Furthermore, the point-and-click interface significantly increased the number of tasks users were able to complete.
Human-Robot Interaction (HRI) is a rapidly expanding field of study that focuses on allowing non-roboticist users to naturally and effectively interact with robots. Conducting extensive user studies has become a fundamental component of HRI research; however, due to the nature of robotics research, such studies often become expensive, time-consuming, and limited to constrained demographics. In this work, we present the Robot Management System (RMS), a novel framework for bringing robotic experiments to the web. We present a detailed description of our open-source system and describe an initial trial of the RMS as a means of conducting user studies. Using a series of navigation and manipulation tasks with a PR2 robot, we compare three user study conditions: users who are co-present with the robot, users who are recruited to the university laboratory but control the robot from a different room, and remote web-based users. Our findings show few statistically significant differences in usability patterns across these groups, validating the use of web-based crowdsourcing techniques for certain types of HRI evaluations.
In order for robots to be useful in real-world learning scenarios, non-expert human teachers must be able to interact with and teach robots in an intuitive manner. One essential robot capability is wide-area (mobile or nonstationary) pick-and-place. Even in its simplest form, pick-and-place is a hard problem due to uncertainty arising from noisy input demonstrations and non-deterministic real-world environments. This work introduces a novel method for goal-based learning from demonstration in which we learn over a large corpus of human-demonstrated ground-truth placement locations in an unsupervised manner via Gaussian Mixture Models. The goal is to provide a multi-hypothesis solution for a given task description which can later be used in the execution of the task itself. In addition to learning the actual arrangements of the items in question, we also autonomously extract which frames of reference are important in each demonstration. We further verify these findings in a subsequent evaluation and execution via a mobile manipulator.
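The abstract does not publish an implementation, but the core idea — fitting a Gaussian mixture to demonstrated placement locations so that each component becomes one placement hypothesis — can be sketched in a few lines of NumPy. The EM loop below is a minimal illustration under stated simplifications (spherical covariances, a fixed component count, and synthetic 2D data); the actual system presumably uses richer covariance models and the learned reference frames described above:

```python
import numpy as np

def fit_gmm(points, k, iters=50):
    """Minimal EM for a spherical-covariance Gaussian mixture.

    points: (n, d) array of demonstrated placement locations.
    Returns (weights, means, variances): one placement hypothesis
    per mixture component.
    """
    n, d = points.shape
    # Farthest-point initialization keeps this sketch deterministic.
    means = [points[0]]
    for _ in range(1, k):
        d2 = np.min(((points[:, None, :] - np.array(means)[None]) ** 2).sum(-1),
                    axis=1)
        means.append(points[int(np.argmax(d2))])
    means = np.array(means)
    variances = np.full(k, points.var() + 1e-6)
    weights = np.full(k, 1.0 / k)

    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        d2 = ((points[:, None, :] - means[None]) ** 2).sum(-1)  # (n, k)
        log_p = (np.log(weights)
                 - 0.5 * d * np.log(2 * np.pi * variances)
                 - d2 / (2 * variances))
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n
        means = (resp.T @ points) / nk[:, None]
        d2 = ((points[:, None, :] - means[None]) ** 2).sum(-1)
        variances = (resp * d2).sum(axis=0) / (d * nk) + 1e-6
    return weights, means, variances

# Two synthetic placement clusters, e.g. positions in a table frame.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0.0, 0.0], 0.05, (40, 2)),
                 rng.normal([1.0, 1.0], 0.05, (40, 2))])
w, mu, var = fit_gmm(pts, k=2)
```

After fitting, each component mean is a candidate placement and its weight indicates how often demonstrators chose that arrangement, giving the multi-hypothesis output the abstract describes.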
Robust remote teleoperation of high-degree-of-freedom manipulators is of critical importance across a wide range of robotics applications. Contemporary robot manipulation interfaces primarily utilize a free positioning pose specification approach to independently control each degree of freedom in free space. In this work, we present two novel interfaces, constrained positioning and point-and-click. Both novel approaches incorporate scene information from depth data into the grasp pose specification process, effectively reducing the number of 3D transformations the user must input. The novel interactions are designed for 2D image streams, rather than traditional 3D virtual scenes, further reducing mental transformations by eliminating the controllable camera viewpoint in favor of fixed physical camera viewpoints. We present interface implementations of our novel approaches, as well as free positioning, in both 2D and 3D visualization modes. In addition, we present the results of a 90-participant user study comparing the effectiveness of each approach for a set of general object manipulation tasks, as well as the effects of implementing each approach in 2D image views versus 3D depth views. The results of our study show that point-and-click outperforms both free positioning and constrained positioning, significantly increasing the number of tasks completed while significantly reducing task failures, grasping errors, and the number of user interactions required to specify poses. In addition, we found that regardless of the interaction approach, the 2D visualization mode resulted in significantly better performance than the 3D visualization mode, with statistically significant reductions in task failures, grasping errors, task completion time, number of interactions, and user workload, all while reducing the bandwidth requirements imposed by streaming depth data.