In this paper, we propose a novel approach to obtain accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind and hence makes no assumptions about the scene or the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of the point clouds produced by LiDAR scanners. To this end, we maintain a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds to ensure accurate alignment with the global 3D map. To cope with drift, the system incorporates loop closure by determining the pose error and propagating it back through the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments. The average distance between corresponding point pairs in the ground-truth and estimated point clouds is approximately one centimeter for an area covering roughly 4000 m². To demonstrate the genericity of the system, it was also tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.
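As a rough illustration of the loop-closure step described above, the following sketch distributes a detected pose error back over a chain of poses. This is not the paper's implementation: the translation-only correction and its even, linear distribution over the chain are simplifying assumptions.

import numpy as np

def distribute_loop_error(poses, loop_error):
    """Spread a residual translation error over a pose chain.

    poses      : list of 4x4 homogeneous pose matrices (world frame)
    loop_error : 3-vector, translation mismatch detected at loop closure
    """
    corrected = []
    n = len(poses)
    for i, T in enumerate(poses):
        T_new = T.copy()
        # Each pose absorbs a fraction of the error proportional to its
        # position in the chain; pose 0 stays fixed, the last pose absorbs
        # the full error. (A real pose graph would also correct rotations.)
        T_new[:3, 3] -= (i / max(n - 1, 1)) * loop_error
        corrected.append(T_new)
    return corrected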
In this paper we present a novel approach to quickly obtain detailed 3D reconstructions of large-scale environments. The method is based on the consecutive registration of 3D point clouds generated by modern LiDAR scanners such as the Velodyne HDL-32e or HDL-64e. The main contribution of this work is that the proposed system explicitly deals with the sparsity and inhomogeneity of the point clouds typically produced by these scanners. More specifically, we combine the simplicity of the traditional iterative closest point (ICP) algorithm with an analysis of the underlying surface of each point in a local neighbourhood. The algorithm was evaluated on our own dataset, captured with accurate ground truth. The experiments demonstrate that the system produces highly detailed 3D maps at 10 sensor frames per second.
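Since the abstract combines ICP with an analysis of the local surface around each point, a point-to-plane style update is the natural reading. Below is a minimal sketch of one linearized point-to-plane iteration, assuming correspondences and per-point normals (e.g., fitted in a local neighbourhood) are already available; it illustrates the metric, not the paper's exact formulation.

import numpy as np

def point_to_plane_step(src, dst, normals):
    """One small-angle least-squares update aligning src to dst.

    src, dst : (N, 3) corresponding points
    normals  : (N, 3) surface normals at dst (from local neighbourhood fits)
    Returns a 4x4 rigid transform.
    """
    # Linearized residual: n_i . (R p_i + t - q_i), with R ~ I + [w]x
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6) Jacobian
    b = np.einsum('ij,ij->i', normals, dst - src)      # (N,) residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    wx, wy, wz, tx, ty, tz = x
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -wz,  wy],
                          [ wz, 1.0, -wx],
                          [-wy,  wx, 1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T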
We analyze whether the presence of corners in Very High Resolution (VHR) satellite images can give an indication of the type of structure present in a scene (man-made versus natural). Two corner detectors are validated in this respect: Harris and SUSAN. The detection performance is evaluated over a spectrum of spatial resolutions for current and future VHR systems (from 2 meters down to 17 centimeters). The ground truth of this study consists of annotated image extracts containing different types of man-made structures, in which the relevant corners have been identified.
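For reference, a minimal Harris detection pass using OpenCV might look as follows; the file name and parameter values are placeholders, since the study's exact settings are not given here.

import cv2
import numpy as np

# Hypothetical image extract; any single-channel satellite crop would do.
img = np.float32(cv2.imread('satellite_extract.png', cv2.IMREAD_GRAYSCALE))

# blockSize: neighbourhood size, ksize: Sobel aperture, k: Harris sensitivity
response = cv2.cornerHarris(img, blockSize=2, ksize=3, k=0.04)

# Keep responses above a relative threshold as candidate corners
corners = np.argwhere(response > 0.01 * response.max())
print(f'{len(corners)} corner candidates detected')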
Many emerging applications in the field of assisted and autonomous driving rely on accurate position information. Satellite-based positioning is not always sufficiently reliable and accurate for these tasks. Visual odometry can provide a solution to some of these shortcomings. Current systems mainly focus on the use of stereo cameras, which are impractical for large-scale deployment in consumer vehicles due to their reliance on accurate calibration. Existing monocular solutions, on the other hand, have significantly lower accuracy. In this paper, we present a novel monocular visual odometry method based on the robust tracking of features in the ground plane. The key concepts behind the method are the modeling of the uncertainty associated with the inverse perspective projection of image features and a parameter-space voting scheme to find a consensus on the vehicle state among the tracked features. Our approach differs from traditional visual odometry methods by applying 2D scene and motion constraints at the lowest level instead of solving for the 3D pose change. Evaluation on both the public KITTI benchmark and our own dataset shows that this is a viable approach to visual odometry which outperforms basic 3D pose estimation by exploiting the largely planar structure of road environments.
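A minimal sketch of the voting idea follows: each tracked ground-plane feature votes for a planar motion hypothesis (yaw change, forward displacement), and the most-voted bin is taken as the consensus vehicle state. The two-parameter, no-lateral-slip motion model used here is an assumption made for illustration; the paper's model and uncertainty weighting may differ.

import numpy as np

def vote_motion(pts_prev, pts_curr, yaw_bins, fwd_bins):
    """pts_prev, pts_curr: (N, 2) ground-plane feature positions in the
    vehicle frame at times t and t+1. Returns the winning (yaw, fwd) bin."""
    votes = np.zeros((len(yaw_bins) - 1, len(fwd_bins) - 1))
    for (px, py), (qx, qy) in zip(pts_prev, pts_curr):
        # Model: q = R(-dyaw) @ (p - [fwd, 0]). The lateral component gives
        # qx*sin(dyaw) + qy*cos(dyaw) = py, solvable for dyaw in closed form.
        r = np.hypot(qx, qy)
        if r < 1e-9 or abs(py) > r:
            continue  # degenerate or inconsistent track
        dyaw = np.arcsin(py / r) - np.arctan2(qy, qx)
        fwd = px - (np.cos(dyaw) * qx - np.sin(dyaw) * qy)
        i = np.searchsorted(yaw_bins, dyaw) - 1
        j = np.searchsorted(fwd_bins, fwd) - 1
        if 0 <= i < votes.shape[0] and 0 <= j < votes.shape[1]:
            votes[i, j] += 1  # one vote per feature track
    i, j = np.unravel_index(votes.argmax(), votes.shape)
    return yaw_bins[i], fwd_bins[j]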
In this paper, we derive a novel iterative closest point (ICP) technique that performs point cloud alignment in a robust and consistent way. Traditional ICP techniques minimize point-to-point distances, which works well when the point clouds are free of noise and clutter and are moreover dense and more or less uniformly sampled. Otherwise, it is better to employ a point-to-plane or similar metric that locally approximates the surface of the objects. However, the point-to-plane metric does not yield a symmetric solution, i.e., the estimated transformation from point cloud p to point cloud q is not necessarily equal to the inverse of the transformation from point cloud q to point cloud p. To improve ICP, we enforce such symmetry constraints as prior knowledge and also make the method robust to noise and clutter. Experimental results show that our method is indeed much more consistent and accurate in the presence of noise and clutter compared to existing ICP algorithms.
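To make the asymmetry concrete: the standard point-to-plane objective uses only the target cloud's normals, whereas a symmetrized variant (written here in the style of symmetric ICP; the paper's exact formulation may differ) couples normals from both clouds so that the two point clouds play interchangeable roles:

E_{pl}(R, t)  = \sum_i \big( (R\,p_i + t - q_i) \cdot n_{q_i} \big)^2
E_{sym}(R, t) = \sum_i \big( (R\,p_i + t - q_i) \cdot (n_{p_i} + n_{q_i}) \big)^2

Minimizing E_{pl} from p to q and from q to p generally yields transformations that are not each other's inverse; using both normals as in E_{sym} removes this directional bias.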