Abstract-We consider the problem of team-based robot mapping and localization using wireless signals broadcast from access points embedded in today's urban environments. We map and localize in an unknown environment, where the access points' locations are unspecified and for which no training data is available a priori. Our approach is a heterogeneous method combining robots with different sensor payloads. The algorithmic design assumes that a sensor-rich robot can produce a map in real time, which can then quickly be shared with sensor-deprived team members. More specifically, we cast WiFi localization as classification and regression problems that we subsequently solve using machine learning techniques. To produce a robust system, we take advantage of the spatial and temporal information inherent in robot motion by running Monte Carlo Localization on top of our regression algorithm, greatly improving its effectiveness. Extensive experiments are presented to demonstrate the accuracy, effectiveness, and practicality of the algorithm.

I. INTRODUCTION

As a result of the evident need for robots to localize in and map unknown environments, a tremendous amount of research has focused on implementing these fundamental abilities. Localization problems have been extensively studied and a variety of solutions have been proposed, each assuming different sensors, robotic platforms, and scenarios. The increasingly popular trend of employing low-cost multi-robot teams [14], as opposed to a single expensive robot, introduces additional constraints and challenges that have received less attention. A tradeoff naturally arises: reducing the number of sensors lowers the robots' price while making the localization problem more challenging. We anticipate that team-based robots will require WiFi technology to exchange information with each other.
We also foresee that robots will continue to provide rough estimates of local movement via odometry or similar inexpensive, low-accuracy sensors. Such team-based robots have the advantage of being very affordable. It is clear, however, that they would not be practical in unknown environments due to their lack of perception abilities; we therefore embrace a heterogeneous setup pairing many of these simple robots with a single robot capable of mapping an environment by traditional means (e.g., SLAM using a laser range finder or other sophisticated proximity sensors). Within this scenario, our goal is to produce a map of an unknown environment in real time using the more capable robot, so that the less sophisticated robots can localize themselves.

Given the sensory constraints imposed on the robots, we exploit wireless signals from Access Points (APs) that have ...
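The scheme of running Monte Carlo Localization on top of a signal-strength regression model can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the function names, the motion-noise level, and the Gaussian sensor model with spread `sigma` are all hypothetical choices.

```python
import numpy as np

def mcl_step(particles, weights, odom_delta, rss_obs, rss_predict, sigma=4.0):
    """One Monte Carlo Localization update: motion, sensor weighting, resampling.

    particles   : (N, 2) array of candidate (x, y) positions
    weights     : (N,) array of particle weights
    odom_delta  : (dx, dy) rough motion estimate from odometry
    rss_obs     : observed RSS vector (dBm), one entry per access point
    rss_predict : callable mapping a position to a predicted RSS vector,
                  e.g. the output of a trained regression model
    """
    n = len(particles)
    # Motion update: shift every particle by odometry plus Gaussian noise.
    particles = particles + odom_delta + np.random.normal(0.0, 0.3, particles.shape)
    # Sensor update: weight each particle by the Gaussian likelihood of the
    # observed signal strengths given the regression model's prediction.
    for i in range(n):
        err = rss_obs - rss_predict(particles[i])
        weights[i] *= np.exp(-0.5 * np.dot(err, err) / sigma ** 2)
    weights = weights / weights.sum()
    # Resample particles proportionally to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```

In this sketch the regression model only has to expose a position-to-RSS prediction; the filter then supplies the spatial and temporal smoothing that raw regression lacks.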
We present a system that enables multiple heterogeneous mobile robots to build and share an appearance-based map suitable for indoor navigation using exclusively monocular vision. Robots incrementally build an appearance-based model online from SIFT descriptors. The spatial model is enriched with additional information so that the map can also be used for navigation by robots other than those that built it. Once the map is available, navigation is performed using an approach based on epipolar geometry. The control mechanism builds upon the unicycle kinematic model and assumes robots are equipped with a servoed camera. The validity of the proposed approach is substantiated both in simulation and on a heterogeneous multirobot system.

I. MOTIVATION AND CONTRIBUTION

This paper presents our first steps towards the implementation of a heterogeneous multi-robot system operating in indoor environments and relying only on visual sensors. We show how a team of heterogeneous robots can build and take advantage of a spatial model of an unknown environment based exclusively on images taken from monocular cameras. The model is then used to localize and safely navigate to a target location specified as a desired robot view. Notably, and differently from most previously developed approaches, the map is built incrementally and does not require a preliminary data acquisition stage followed by a lengthy off-line map generation process. Our eventual goal is to equip these robots with mapping and navigation abilities comparable to those displayed by more sophisticated systems using laser range finders. While the spatial model will obviously be different, we strive to reach the same level of autonomy and safety in navigation. We stick to monocular images because monocular cameras are cheap and represent a ready-to-use tool for exchanging high-level information between hand-held devices and robot systems.
Therefore, this appears to be a natural way to exchange information between users and robots, or to specify interesting locations for the robot to visit. Our work builds upon past contributions in the fields of visual servoing, mapping, and computer vision, and achieves a new level of competence, namely heterogeneous vision-based navigation. The system described in this paper builds from scratch an appearance-based map capturing salient visual features detected in the environment explored by the robot. Features inserted into the map are not tied to a specific robot morphology but are, so to speak, disembodied, inasmuch as they can be interpreted and reused by robots with a morphology different from the one that produced the map. The map can then be used to localize a robot and also for navigation towards a desired target image. In Section II we briefly describe related literature in the field of spatial modeling using vision. Next, in Section III we present a method that allows ...

G. Erinc and S. Carpin are with the School of Engineering, University of California, Merced (USA).
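An appearance-based map of the kind described above links images through shared local features. The following is a minimal sketch of that linking step, assuming SIFT descriptors have already been extracted per image; the ratio threshold, the `min_matches` parameter, and all function names are our own hypothetical choices, not the authors'.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Lowe-style ratio-test matching between two sets of SIFT descriptors.

    desc_a, desc_b : (Na, 128) and (Nb, 128) descriptor arrays.
    Returns index pairs (i, j) where a's i-th descriptor matches b's j-th.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:  # best match clearly beats runner-up
            matches.append((i, int(j)))
    return matches

def link_images(image_descriptors, min_matches=3):
    """Build appearance-map edges: connect image pairs sharing enough matches.

    image_descriptors : list of per-image descriptor arrays.
    Returns edges (a, b) between images with at least min_matches matches.
    """
    edges = []
    n = len(image_descriptors)
    for a in range(n):
        for b in range(a + 1, n):
            m = match_descriptors(image_descriptors[a], image_descriptors[b])
            if len(m) >= min_matches:
                edges.append((a, b))
    return edges
```

Because descriptors, not robot poses, define the map, any robot that can extract the same features can localize against it, which is the sense in which the map is "disembodied".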
Appearance-based maps are emerging as an important class of spatial representations for mobile robots. In this paper we tackle the problem of merging two or more appearance-based maps independently built by robots operating in the same environment. Noting the lack of well-accepted metrics for measuring the performance of map merging algorithms, we propose algebraic connectivity as a metric to assess the advantage gained by merging multiple maps. Based on this criterion, we then propose an anytime algorithm that aims to quickly identify the most advantageous parts to merge. The proposed system has been fully implemented and tested in indoor scenarios; the results show that our algorithm achieves a convenient tradeoff between accuracy and speed.
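Algebraic connectivity, the metric proposed above, is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value) of the appearance map viewed as a graph of images. A small sketch of how one might compute it; representing the map as a plain edge list is our simplification, not the paper's data structure.

```python
import numpy as np

def algebraic_connectivity(edges, n):
    """Fiedler value: second-smallest eigenvalue of the graph Laplacian.

    edges : list of undirected edges (a, b) between image nodes 0..n-1
    n     : number of nodes in the appearance map

    A higher value indicates a better-connected map; comparing it before
    and after a candidate merge scores how useful that merge is.
    """
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1.0
        L[b, b] += 1.0
        L[a, b] -= 1.0
        L[b, a] -= 1.0
    # eigvalsh returns eigenvalues of the symmetric Laplacian in
    # ascending order; the smallest is always 0 for a connected graph.
    return np.linalg.eigvalsh(L)[1]
```

For example, a three-node path graph has connectivity 1, while the triangle obtained by adding one more edge has connectivity 3, so the metric rewards edges that close loops between maps.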