This paper presents two techniques to detect and classify navigable terrain in complex 3D environments. The first is a low-level, on-line mechanism that detects obstacles and holes at a fast frame rate, using a time-of-flight camera as the main sensor. The second is a high-level, off-line classification mechanism that learns traversable regions from larger 3D point clouds acquired with a laser range scanner. We approach the problem with Gaussian Processes, both as a regression tool in which the terrain parameters are learned, and as a classifier that uses samples from traversed areas to build the traversable-terrain class. The two methods are compared against unsupervised classification, and sample trajectories are generated in the classified areas with a non-holonomic path planner. We show results of both the low-level and the high-level terrain classification approaches in simulations and in real-time navigation experiments using a Segway RMP400 robot.
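The abstract does not detail the Gaussian Process machinery; as a rough, self-contained sketch (the synthetic data, RBF kernel, hyperparameters, and slope threshold below are illustrative assumptions, not the paper's), GP regression over sparse elevation samples followed by a slope-based traversability label could look like:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between row-vector inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-4):
    """Posterior mean and variance of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s.T)
    var = rbf_kernel(X_test, X_test).diagonal() - np.einsum('ij,ji->i', K_s, v)
    return mean, var

# Synthetic elevation profile: sparse, noisy samples of smooth terrain.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 10.0, size=(60, 1))
y_train = 0.5 * np.sin(x_train).ravel() + rng.normal(0.0, 0.02, 60)

# Predict elevation on a dense grid, then label cells with shallow
# predicted slope as traversable (threshold chosen for illustration).
x_test = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, var = gp_predict(x_train, y_train, x_test, noise=0.02**2)
slope = np.abs(np.gradient(mean, x_test.ravel()))
traversable = slope < 0.4
```

The same posterior machinery extends to classification: samples gathered while the robot actually traverses a region supply the positive class, as the abstract describes.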
Abstract-We present an approach to the problem of 3D map building in urban settings for service robots, using three-dimensional laser range scans as the main data input. Our system is based on the probabilistic alignment of 3D point clouds employing a delayed-state information-form SLAM algorithm, for which we can add observations of relative robot displacements efficiently. These observations come from the alignment of dense range data point clouds computed with a variant of the iterative closest point algorithm. The datasets were acquired with our custom-built 3D range scanner integrated into a mobile robot platform. Our mapping results are compared to a GIS-based CAD model of the experimental site. The results show that our approach to 3D mapping performs with sufficient accuracy to derive traversability maps that allow our service robots to navigate and accomplish their assigned tasks in an urban pedestrian area.
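The point-to-point formulation below is a minimal sketch of the iterative closest point idea the abstract builds on, not the paper's variant: it alternates nearest-neighbour matching with a closed-form Kabsch/SVD rigid fit, and all the synthetic cloud parameters are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30):
    """Basic point-to-point ICP: alternate closest-point matching and Kabsch."""
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)               # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    # Recover the accumulated transform from original source to aligned cloud.
    return best_rigid_transform(source, src)

# Synthetic check: displace a cloud by a small rotation and translation,
# then realign it against the original.
rng = np.random.default_rng(1)
target = rng.uniform(-1.0, 1.0, size=(200, 3))
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
source = target @ R_true.T + np.array([0.05, -0.03, 0.02])
R_est, t_est = icp(source, target)
aligned = source @ R_est.T + t_est
```

In a SLAM front end along these lines, each converged alignment yields one relative-displacement observation that can be fed to the delayed-state information filter.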
Abstract-Outdoor camera networks are becoming ubiquitous in critical urban areas of large cities around the world. Although current applications of camera networks are mostly limited to video surveillance, recent research projects are exploiting advances in outdoor robotics technology to develop systems that put together networks of cameras and mobile robots in people-assistance tasks. Such systems require robot navigation systems for urban areas, together with a precise calibration of the distributed camera network. Although camera calibration has been an extensively studied topic, the calibration (intrinsic and extrinsic) of large outdoor camera networks with no overlapping fields of view, and likely to require frequent recalibration, poses novel challenges in the development of practical methods for user-assisted calibration that minimize intervention times and maximize precision. In this paper we propose the use of Laser Range Finder (LRF) data covering the area of the camera network to support the calibration process, and we develop a semi-automated methodology allowing quick and precise calibration of large camera networks. The proposed methods have been tested in a real urban environment and have been applied to create direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms.
Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method. The first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
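The image-to-ground-plane mappings mentioned above are plane-to-plane homographies. As a minimal illustration (not the papers' calibration pipeline; the unnormalized DLT and the synthetic correspondences below are assumptions), a homography can be estimated from four or more point correspondences by solving a homogeneous linear system:

```python
import numpy as np

def homography_dlt(img_pts, world_pts):
    """Estimate a 3x3 homography H with world ~ H @ [x, y, 1]^T (basic DLT)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, X * x, X * y, X])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, Y * x, Y * y, Y])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Synthetic ground-truth homography and exact correspondences.
H_true = np.array([[1.2,  0.1,  5.0],
                   [-0.05, 1.1, 2.0],
                   [0.001, 0.002, 1.0]])
img_pts = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0), (120.0, 90.0), (50.0, 40.0)]
world_pts = [apply_homography(H_true, p) for p in img_pts]

H_est = homography_dlt(img_pts, world_pts)
```

With noisy real correspondences one would normalize the coordinates first and use more points in a least-squares or robust (e.g. RANSAC) fit; the exact synthetic data here keeps the sketch short.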