Building upon previous work that demonstrates the effectiveness of WiFi localization information per se, in this paper we contribute a mobile robot that autonomously navigates in indoor environments using WiFi sensory data. We model the world as a WiFi signature map with geometric constraints and introduce a continuous perceptual model of the environment generated from discrete graph-based WiFi signal strength sampling. We contribute a WiFi localization algorithm that continuously uses the perceptual model, in conjunction with odometry data, to update the robot's location. We then briefly introduce a navigation approach that robustly uses the WiFi location estimates. We present the results of exhaustive tests of the WiFi localization, both independently and in conjunction with the navigation of our custom-built mobile robot, over extensive long autonomous runs. (Thanks to Mike Licitra, who designed and built the robot.)
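The abstract describes a continuous perceptual model built from discrete, graph-based WiFi signal-strength samples and fused with odometry for localization. As a rough illustration of that idea only (not the authors' implementation), the following Python sketch interpolates the discrete samples into a continuous expected-signal map and uses it to reweight odometry-propagated particles; the class and function names, the inverse-distance interpolation, and the noise parameter `sigma` are all assumptions.

```python
import numpy as np

class WifiMap:
    def __init__(self, sample_xy, sample_rss):
        # sample_xy: (N, 2) positions of the discrete WiFi samples (graph nodes).
        # sample_rss: (N, K) mean signal strength per access point at each node.
        self.xy = np.asarray(sample_xy, dtype=float)
        self.rss = np.asarray(sample_rss, dtype=float)

    def expected_rss(self, pose_xy):
        # Continuous perceptual model: inverse-distance-weighted interpolation
        # of the discrete samples (a stand-in for the paper's model).
        d = np.linalg.norm(self.xy - pose_xy, axis=1) + 1e-6
        w = 1.0 / d**2
        return (w[:, None] * self.rss).sum(axis=0) / w.sum()

def wifi_update(particles, weights, observed_rss, wifi_map, sigma=6.0):
    # Reweight odometry-propagated particles (x, y, theta) by the WiFi
    # observation likelihood under the continuous model.
    for i, p in enumerate(particles):
        err = observed_rss - wifi_map.expected_rss(p[:2])
        weights[i] *= np.exp(-0.5 * err @ err / sigma**2)
    return weights / weights.sum()
```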
Abstract-The sheer volume of data generated by depth cameras poses a challenge for real-time processing, in particular when the data are used for indoor mobile robot localization and navigation. We introduce the Fast Sampling Plane Filtering (FSPF) algorithm, which reduces the volume of the 3D point cloud by sampling points from the depth image and classifying local grouped sets of points either as belonging to planes in 3D (the "plane-filtered" points) or as not corresponding to planes within a specified error margin (the "outlier" points). We then introduce a localization algorithm based on an observation model that down-projects the plane-filtered points onto 2D and assigns correspondences for each point to lines in the 2D map. The full sampled point cloud (consisting of both plane-filtered and outlier points) is processed for obstacle avoidance during autonomous navigation. All our algorithms process only the depth information and do not require additional RGB data. The FSPF, localization, and obstacle avoidance algorithms run in real time at full camera frame rates (30 Hz) with low CPU requirements (16%). We provide experimental results demonstrating the effectiveness of our approach for indoor mobile robot localization and navigation, and we compare the accuracy and robustness of localization using depth cameras with FSPF against alternative approaches that simulate laser rangefinder scans from the 3D data.
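To make the sampling-and-classification step concrete, here is a minimal Python sketch of an FSPF-style filter, assuming a dense per-pixel 3D point image as input. It samples a pixel triple inside a local window, fits a candidate plane, and tests further window samples against it; all parameter names and values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def fspf(depth_points, n_max=200, window=30, k_local=20,
         max_err=0.02, min_inlier_frac=0.8, max_tries=2000):
    # depth_points: (H, W, 3) array of 3D points indexed by depth-image pixel.
    # Returns (plane_points, outlier_points).
    h, w = depth_points.shape[:2]
    plane_pts, outlier_pts = [], []
    for _ in range(max_tries):
        if len(plane_pts) >= n_max:
            break
        # 1. Sample one pixel, then two more inside a local window, so the
        #    three points are likely to lie on a single surface.
        u0 = rng.integers(window, w - window)
        v0 = rng.integers(window, h - window)
        du = rng.integers(-window, window, 2)
        dv = rng.integers(-window, window, 2)
        p0 = depth_points[v0, u0]
        p1 = depth_points[v0 + dv[0], u0 + du[0]]
        p2 = depth_points[v0 + dv[1], u0 + du[1]]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:
            continue  # degenerate triple, resample
        normal /= np.linalg.norm(normal)
        # 2. Sample k_local more pixels in the window and measure their
        #    perpendicular distance to the candidate plane.
        us = u0 + rng.integers(-window, window, k_local)
        vs = v0 + rng.integers(-window, window, k_local)
        pts = depth_points[vs, us]
        err = np.abs((pts - p0) @ normal)
        inliers = pts[err < max_err]
        # 3. Keep the neighborhood as "plane filtered" if enough points fit;
        #    otherwise record the samples as outliers.
        if len(inliers) >= min_inlier_frac * k_local:
            plane_pts.extend(inliers)
        else:
            outlier_pts.extend(pts)
    return np.array(plane_pts), np.array(outlier_pts)
```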
Abstract-For the last three years, we have developed and researched multiple collaborative robots, CoBots, which have been autonomously traversing our multi-floor buildings. We pursue the goal of long-term autonomy for indoor service mobile robots: the ability for them to be deployed indefinitely while they perform tasks in an evolving environment. The CoBots include several levels of autonomy, and in this paper we focus on their localization and navigation algorithms. We present the Corrective Gradient Refinement (CGR) algorithm, which refines the proposal distribution of the localization particle filter using sensor observations and analytically computed state-space derivatives on a vector map. We also present the Fast Sampling Plane Filtering (FSPF) algorithm, which extracts planar regions from depth images in real time. These planar regions are then projected onto the 2D vector map of the building and, along with the laser rangefinder observations, used with CGR for localization. For navigation, we present a hierarchical planner, which computes a topological policy using a graph representation of the environment, computes motion commands based on the topological policy, and then modifies the motion commands to side-step perceived obstacles. The continuous deployments of the CoBots over the course of one and a half years have provided us with logs of the CoBots traversing more than 130 km over 1082 deployments, which we publish as a dataset consisting of more than 10 million laser scans. The logs show that although there have been continuous changes in the environment, the robots are robust to most of them, and there exist only a few locations where changes in the environment cause increased uncertainty in localization.
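As a rough sketch of the kind of refinement step CGR performs, the following Python fragment moves each odometry-propagated particle down the analytic gradient of a point-to-line observation error on a vector map, sharpening the proposal distribution before importance weighting. It assumes per-point line correspondences are already known; the function names, step size, and iteration count are illustrative, not the authors' code.

```python
import numpy as np

def point_line_gradient(pose, obs_pts, line_n, line_d):
    # Analytic gradient of the summed squared point-to-line error with
    # respect to pose (x, y, theta). obs_pts: (M, 2) points in the robot
    # frame; line_n: (M, 2) unit normals of the corresponding map lines;
    # line_d: (M,) offsets such that n . p = d for points on the line.
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    world = obs_pts @ R.T + [x, y]         # observed points in world frame
    r = (world * line_n).sum(1) - line_d   # signed point-to-line residuals
    dR = np.array([[-s, -c], [c, -s]])     # dR/dtheta
    dworld_dth = obs_pts @ dR.T
    return np.array([
        2 * (r * line_n[:, 0]).sum(),                  # dE/dx
        2 * (r * line_n[:, 1]).sum(),                  # dE/dy
        2 * (r * (dworld_dth * line_n).sum(1)).sum(),  # dE/dtheta
    ])

def cgr_refine(particles, obs_pts, correspond, step=1e-3, iters=5):
    # Refine each particle by a few analytic gradient-descent steps on the
    # observation error (the corrective part of CGR, in sketch form).
    for _ in range(iters):
        for i, p in enumerate(particles):
            line_n, line_d = correspond(p)  # map-line match for each point
            particles[i] = p - step * point_line_gradient(p, obs_pts, line_n, line_d)
    return particles
```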
In this video we briefly illustrate the progress and contributions made with our mobile, indoor, service robots CoBots (Collaborative Robots) since their creation in 2009. Many researchers, the present authors included, aim for autonomous mobile robots that robustly perform service tasks for humans in our indoor environments. The efforts towards this goal have been numerous and successful, and we build upon them. However, many research challenges remain before we can experience intelligent mobile robots that are fully functional and capable in our human environments.

Our research and continuous indoor deployment of the CoBot robots in multi-floor office-style buildings provide multiple contributions, including: robust real-time autonomous localization [1], based on WiFi data [2] and on depth camera information [3]; symbiotic autonomy, in which the deployed robots overcome their perceptual, cognitive, and actuation limitations by proactively asking for help from humans [4], [5] and, in ongoing experiments, from the web [6], [7] and from other robots [8], [9]; human-centered planning, in which models of humans are explicitly used in robot task and path planning [10]; semi-autonomous telepresence, enabling the combination of rich remote visual and motion control with autonomous robot localization and navigation [11]; web-based user task selection and information interfaces [12]; and creative multi-robot task scheduling and execution [12]. Furthermore, we have developed a 3D simulation of the multi-floor, multi-person environment that will allow extensive learning experiments to provide approximate initial models and parameters to be refined with the real robots' experiences. Finally, our robot platform is extremely effective, in particular with its stable, low-clearance, omnidirectional base. The CoBot robots were designed and built by Michael Licitra (mlicitra@cmu.edu), and the base is a scaled-up version of the CMDragons small-size soccer robots [13], also designed and built by Licitra. Remarkably, the robots have operated over 200 km for more than three years without any hardware failures and with minimal maintenance. Our robots purposefully include a modest variety of sensing and computing devices, including the Microsoft Kinect depth camera, vision cameras for telepresence and interaction, a small Hokuyo LIDAR for obstacle avoidance and localization comparison studies (no longer present on the most recent CoBot-4), a touch-screen and speech-enabled tablet, microphones and speakers, as well as wireless signal access and processing.

The CoBot robots can perform multiple classes of tasks:
• A single destination task, in which the user asks the robot to go to a specific location (the Go-To-Room task) and, in addition, to deliver a specified spoken message (the Deliver-Message task);
• An item transport task, in which the user requests the robot to retrieve an item at a specified location and to deliver it to a destination location: this Transport task also acts as the task to accompany a person bet...