Image-based localization is an important problem with many applications. In our previous work, we presented a two-step pipeline for performing image-based localization of mobile devices in outdoor environments. In the first step, a query image is matched against a georeferenced 3D image database to retrieve the "closest" image. In the second step, the pose of the query image is recovered with respect to the "closest" image using cell phone sensors. As such, a key ingredient of our outdoor image-based localization is a 3D georeferenced image database. In this paper, we extend this approach to indoors by utilizing a 3D locally referenced image database generated by an ambulatory depth acquisition backpack originally developed for 3D modeling of indoor environments. We demonstrate a retrieval rate of 94% over a set of 83 query images taken in an indoor shopping center and characterize the pose recovery accuracy of the same set.
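The retrieval step of this pipeline can be illustrated with a minimal sketch, assuming SIFT descriptors matched with Lowe's ratio test over a small in-memory list of database image paths; the scoring rule and function names below are illustrative assumptions, not the exact retrieval method of the paper.

```python
# Illustrative sketch of the retrieval step: score each database image by the
# number of SIFT matches that survive Lowe's ratio test, and return the
# best-scoring ("closest") image. Requires opencv-contrib-python.
import cv2

def count_good_matches(des_query, des_db, ratio=0.75):
    """Count query-to-database SIFT matches passing the ratio test."""
    if des_query is None or des_db is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_query, des_db, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)

def retrieve_closest(query_path, db_paths):
    """Return the database image path with the most good matches to the query."""
    sift = cv2.SIFT_create()
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, des_q = sift.detectAndCompute(query, None)
    scores = {}
    for path in db_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des_db = sift.detectAndCompute(img, None)
        scores[path] = count_good_matches(des_q, des_db)
    return max(scores, key=scores.get)
```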
Automated 3D modeling of building interiors is useful in applications such as virtual reality and environment mapping. We have developed a human-operated backpack data acquisition system equipped with a variety of sensors such as cameras, laser scanners, and orientation measurement sensors to generate 3D models of building interiors, including uneven surfaces and stairwells. An important intermediate step in any 3D modeling system, including ours, is accurate 6-degree-of-freedom localization over time. In this paper, we propose two approaches to improve localization accuracy over our previously proposed methods. First, we develop an adaptive localization algorithm which takes advantage of the environment's floor planarity whenever possible. Second, we show that by including all the loop closures resulting from two cameras facing away from each other, it is possible to reduce localization error in scenarios where parts of the acquisition path are retraced. We experimentally characterize the performance gains due to both schemes.
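A minimal sketch of how an adaptive, planarity-aware update might be gated is shown below, assuming the floor laser returns are available as an N x 3 NumPy array; the SVD plane fit, the residual threshold, and the choice of which degrees of freedom to hold fixed are illustrative assumptions rather than the paper's exact algorithm.

```python
# Illustrative planarity check: fit a plane to floor points with SVD and use the
# residual spread to decide whether planar-floor constraints can be applied.
import numpy as np

def floor_is_planar(floor_points, residual_thresh=0.02):
    """Return True if all floor points lie within residual_thresh (meters) of a best-fit plane."""
    centered = floor_points - floor_points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = np.abs(centered @ normal)
    return residuals.max() < residual_thresh

def localization_mode(floor_points):
    """Choose between a constrained planar update and a full 6-DOF update."""
    if floor_is_planar(floor_points):
        return "planar: hold z, roll, pitch fixed; estimate x, y, yaw"
    return "full 6-DOF update"
```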
Indoor localization and mapping is an important problem with many applications such as emergency response, architectural modeling, and historical preservation. In this paper, we develop an automatic, off-line pipeline for metrically accurate, GPS-denied, indoor 3D mobile mapping using a human-mounted backpack system consisting of a variety of sensors. There are three novel contributions in our proposed mapping approach. First, we present an algorithm which automatically detects loop closure constraints from an occupancy grid map. In doing so, we ensure that constraints are detected only in locations that are well conditioned for scan matching. Second, we address the problem of scan matching with poor initialization by presenting an outlier-resistant, genetic scan matching algorithm that accurately matches scans despite a poor initial condition. Third, we present two metrics based on the amount and complexity of overlapping geometry in order to vet the estimated loop closure constraints. By doing so, we automatically prevent erroneous loop closures from degrading the accuracy of the reconstructed trajectory. The proposed algorithms are experimentally verified using both controlled and real-world data. The end-to-end system performance is evaluated using 100 surveyed control points in an office environment, yielding a mean accuracy of 10 cm. Experimental results are also shown on three additional datasets from real-world environments, including a 1500-meter trajectory in a warehouse-sized retail shopping center.
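The genetic scan-matching idea can be illustrated with a minimal sketch, assuming 2D scans given as N x 2 point arrays and a fitness score based on nearest-neighbor distances to the reference scan; the population size, crossover and mutation rules, and fitness definition are assumptions for illustration, not the paper's tuned algorithm.

```python
# Illustrative genetic search over 2D rigid transforms (dx, dy, dtheta) that
# aligns a scan to a reference scan despite a poor initial guess.
import numpy as np
from scipy.spatial import cKDTree

def transform(scan, pose):
    """Apply a planar rigid transform (dx, dy, dtheta) to an N x 2 scan."""
    dx, dy, dtheta = pose
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    return scan @ R.T + np.array([dx, dy])

def fitness(pose, scan, ref_tree, inlier_dist=0.1):
    """Robust score: fraction of transformed scan points landing near the reference scan."""
    d, _ = ref_tree.query(transform(scan, pose))
    return np.mean(d < inlier_dist)

def genetic_scan_match(scan, ref_scan, init=(0.0, 0.0, 0.0),
                       pop_size=60, generations=40, sigma=(0.5, 0.5, 0.3), seed=0):
    rng = np.random.default_rng(seed)
    ref_tree = cKDTree(ref_scan)
    pop = np.array(init) + rng.normal(scale=sigma, size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(p, scan, ref_tree) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]        # keep the best quarter
        parents = elite[rng.integers(len(elite), size=(pop_size, 2))]
        children = parents.mean(axis=1)                          # blend crossover
        children += rng.normal(scale=np.array(sigma) * 0.2,
                               size=children.shape)              # mutation
        pop = children
    scores = np.array([fitness(p, scan, ref_tree) for p in pop])
    return pop[np.argmax(scores)]
```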
Image-based positioning has important commercial applications such as augmented reality and customer analytics. In our previous work, we presented a two-step pipeline for performing image-based positioning of mobile devices in outdoor environments. In this chapter, we modify and extend the pipeline to work for indoor positioning. In the first step, we generate a sparse 2.5D georeferenced image database using an ambulatory backpack-mounted system originally developed for 3D modeling of indoor environments. In the second step, a query image is matched against the image database to retrieve the best-matching database image. In the final step, the pose of the query image is recovered with respect to the best-matching image. Since the pose recovery in step three only requires depth information at certain SIFT feature keypoints in the database image, we only require sparse depthmaps that indicate the depth values at these keypoints. Our experimental results in a shopping mall indicate that our pipeline is capable of achieving sub-meter image-based indoor positioning accuracy.
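A minimal sketch of the final pose-recovery step is given below, assuming SIFT correspondences between the query and the best-matching database image are already established, the database keypoints carry depths from the sparse depthmap, and the camera intrinsics K are known; the function names and the use of OpenCV's RANSAC PnP solver are illustrative choices, not the authors' exact solver.

```python
# Illustrative pose recovery: back-project matched database keypoints using their
# sparse depths, then solve a PnP problem against the matched query keypoints.
import cv2
import numpy as np

def backproject(points_2d, depths, K):
    """Lift N x 2 pixel coordinates to 3D camera-frame points using per-keypoint depth."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (points_2d[:, 0] - cx) * depths / fx
    y = (points_2d[:, 1] - cy) * depths / fy
    return np.column_stack([x, y, depths])

def recover_pose(db_keypoints_2d, db_depths, query_keypoints_2d, K):
    """Estimate the query camera pose relative to the database camera with RANSAC PnP."""
    object_points = backproject(db_keypoints_2d, db_depths, K).astype(np.float32)
    image_points = query_keypoints_2d.astype(np.float32)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, K.astype(np.float64), None)
    if not ok:
        raise RuntimeError("PnP failed: not enough consistent correspondences")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers
```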
Image-based localization has important commercial applications such as augmented reality and customer analytics. In prior work, we developed a three-step pipeline for image-based localization of mobile devices in indoor environments. In the first step, we generate a 2.5D georeferenced image database using an ambulatory backpack-mounted system originally developed for 3D modeling of indoor environments. Specifically, we first create a dense 3D point cloud and polygonal model from the side laser scanner measurements of the backpack, and then use it to generate dense 2.5D database image depthmaps by raytracing the 3D model. In the second step, a query image is matched against the image database to retrieve the best-matching database image. In the final step, the pose of the query image is recovered with respect to the best-matching image. Since the pose recovery in step three only requires sparse depth information at certain SIFT feature keypoints in the database image, in this paper we improve upon our previous method by only calculating depth values at these keypoints, thereby reducing the required number of sensors in our data acquisition system. To do so, we use a modified version of the classic multi-camera 3D scene reconstruction algorithm, thereby eliminating the need for expensive geometry laser range scanners. Our experimental results in a shopping mall indicate that the proposed reduced-complexity sparse depthmap approach is nearly as accurate as our previous dense depthmap method.
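The sparse-depth computation can be sketched as follows, assuming matched SIFT keypoints between two overlapping database images whose camera poses (world-to-camera rotations and translations) are known from the backpack localization, and a shared intrinsic matrix K; this is a minimal multi-view triangulation sketch, not the modified reconstruction algorithm described in the paper.

```python
# Illustrative sparse-depth computation: triangulate matched SIFT keypoints seen in
# two database images with known camera poses, and keep only the per-keypoint depths.
import cv2
import numpy as np

def keypoint_depths(kp1, kp2, K, R1, t1, R2, t2):
    """Triangulate matched N x 2 pixel arrays and return depths in the camera-1 frame.

    R1, t1 and R2, t2 are assumed to map world coordinates into each camera frame.
    """
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])   # 3x4 projection matrix of camera 1
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])   # 3x4 projection matrix of camera 2
    pts4d = cv2.triangulatePoints(P1, P2,
                                  kp1.T.astype(np.float64),
                                  kp2.T.astype(np.float64))
    pts3d = (pts4d[:3] / pts4d[3]).T             # convert from homogeneous coordinates
    cam1_points = (R1 @ pts3d.T).T + t1          # express points in the camera-1 frame
    return cam1_points[:, 2]                     # depth = z coordinate at each keypoint
```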