This paper proposes an alternative to common teleoperation methods for search and rescue (SAR) robots. Using a head-mounted display (HMD), the operator perceives rectified images of the robot's environment in 3-D, as transmitted by a pair of stereo cameras on board the robot. The HMD also includes an integrated head tracker, which allows the robot to be controlled so that the cameras follow the operator's head movements, providing an immersive sensation. We claim that this approach yields more intuitive and less error-prone teleoperation of the robot. The proposed system was evaluated by a group of subjects, and the results suggest that it may significantly benefit the effectiveness of the SAR mission. In particular, users' depth perception and situational awareness improved significantly when using the HMD, and their performance during a simulated SAR operation was also enhanced, both in operation time and in the successful identification of objects of interest.
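The head-following behavior described above can be sketched as a simple mapping from tracked head orientation to camera setpoints. This is a minimal illustrative sketch, not the paper's implementation: the function names, angle conventions, and mechanical limits are all assumptions.

```python
import math

# Assumed mechanical limits of a hypothetical pan/tilt camera mount
# (illustrative values, not taken from the paper).
PAN_LIMIT = math.radians(90)   # max pan left/right, in radians
TILT_LIMIT = math.radians(45)  # max tilt up/down, in radians

def clamp(value, limit):
    """Clamp a value to the symmetric range [-limit, +limit]."""
    return max(-limit, min(limit, value))

def head_to_camera(yaw, pitch):
    """Map head-tracker yaw/pitch (radians) to clamped pan/tilt setpoints,
    so the robot's stereo cameras follow the operator's head movements."""
    return clamp(yaw, PAN_LIMIT), clamp(pitch, TILT_LIMIT)
```

In a real system the setpoints would be fed to the robot's motor controllers at the tracker's update rate, with the clamping preventing commands beyond the mount's mechanical range.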
In this video we briefly illustrate the progress and contributions made with our mobile, indoor, service robots, the CoBots (Collaborative Robots), since their creation in 2009. Many researchers, present authors included, aim for autonomous mobile robots that robustly perform service tasks for humans in our indoor environments. The efforts towards this goal have been numerous and successful, and we build upon them. However, there are clearly many research challenges remaining before we can experience intelligent mobile robots that are fully functional and capable in our human environments. Our research and continuous indoor deployment of the CoBot robots in multi-floor office-style buildings provide multiple contributions, including: robust real-time autonomous localization [1], based on WiFi data [2] and on depth camera information [3]; symbiotic autonomy, in which the deployed robots overcome their perceptual, cognitive, and actuation limitations by proactively asking for help from humans [4], [5] and, in ongoing experiments, from the web [6], [7] and from other robots [8], [9]; human-centered planning, in which models of humans are explicitly used in robot task and path planning [10]; semi-autonomous telepresence, combining rich remote visual and motion control with autonomous robot localization and navigation [11]; web-based user task selection and information interfaces [12]; and creative multi-robot task scheduling and execution [12]. Furthermore, we have developed a 3D simulation of the multi-floor, multi-person environment, which will allow extensive learning experiments to provide approximate initial models and parameters to be refined with the real robots' experiences. Finally, our robot platform is extremely effective, in particular with its stable low-clearance, omnidirectional base.
The CoBot robots were designed and built by Michael Licitra (mlicitra@cmu.edu), and the base is a scaled-up version of the CMDragons small-size soccer robots [13], also designed and built by Licitra. Remarkably, the robots have operated over 200 km for more than three years without any hardware failures and with minimal maintenance. Our robots purposefully include a modest variety of sensing and computing devices, including the Microsoft Kinect depth camera, vision cameras for telepresence and interaction, a small Hokuyo LIDAR for obstacle avoidance and localization comparison studies (no longer present in the most recent CoBot-4), a touch-screen and speech-enabled tablet, microphones and speakers, as well as wireless signal access and processing. The CoBot robots can perform multiple classes of tasks:
• A single destination task, in which the user asks the robot to go to a specific location (the Go-To-Room task) and, in addition, to deliver a specified spoken message (the Deliver-Message task);
• An item transport task, in which the user requests the robot to retrieve an item at a specified location and to deliver it to a destination location: this Transport task also acts as the task to accompany a person bet...
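The task classes listed above could be represented as simple typed requests. This is a hypothetical sketch only; the class and field names are illustrative assumptions and do not reflect CoBot's actual task interface.

```python
from dataclasses import dataclass

# Illustrative representations of the CoBot task classes described above.

@dataclass
class GoToRoom:
    """Single destination task: go to a specific location."""
    destination: str

@dataclass
class DeliverMessage:
    """Go to a location and deliver a specified spoken message."""
    destination: str
    message: str

@dataclass
class Transport:
    """Retrieve an item at one location and deliver it to another."""
    pickup: str
    dropoff: str
    item: str

def describe(task) -> str:
    """Render a short human-readable summary of a task request."""
    if isinstance(task, GoToRoom):
        return f"go to {task.destination}"
    if isinstance(task, DeliverMessage):
        return f"deliver message {task.message!r} at {task.destination}"
    if isinstance(task, Transport):
        return f"bring {task.item} from {task.pickup} to {task.dropoff}"
    raise ValueError("unknown task type")
```

A scheduler could queue such requests from a web interface and dispatch them to the robot's planner one at a time.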