ABSTRACT: The aim of this contribution is to present the results of evaluations of 3D digitizations performed with different methodologies and technologies. For surveys conducted at the architectural and urban scale, the recent reduction in the cost of Time-of-Flight and phase-shift laser scanners is driving the replacement of traditional topographic instruments (i.e. total stations) with range-based technologies for the acquisition of 3D data on built heritage. Compared to surveys performed with traditional topographic instruments, range-based techniques offer a wide range of advantages, but they also require different skills, procedures and times. This contribution reports a practical application of both approaches to the same case study. A further comparison was suggested by recent developments in photogrammetry, namely software able to automatically orient uncalibrated cameras and derive dense and accurate 3D point clouds, with evident benefits in the cost of the survey equipment. The case study therefore also provided the occasion to compare the range-based survey with a fast 3D acquisition and modelling workflow based on a Structure-from-Motion solution. These survey procedures were applied at an architectural scale to a single building, surveyed both outside and inside. Assessments of the quality of the reconstructed information are reported in terms of metric accuracy and reliability, as well as of the time and skills required at each step of the adopted pipelines. For all approaches, these analyses highlight advantages and disadvantages that make it possible to evaluate the convenience of adopting range-based technologies instead of a traditional topographic approach, or a photogrammetric approach instead of a range-based one, for surveys conducted at an architectural/urban scale.
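As one possible way to quantify the metric agreement between the range-based and the photogrammetric point clouds, the following Python sketch computes simple cloud-to-cloud deviation statistics with a nearest-neighbour search. It is not the evaluation pipeline used in the paper: the file names, the assumption that the two clouds are already co-registered and expressed in metres, and the chosen statistics are illustrative assumptions.

# Hedged sketch: cloud-to-cloud deviation between a laser-scanner reference
# and an SfM dense cloud, both assumed co-registered and in metres.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_deviation(reference_xyz, test_xyz):
    # Distance from every point of the test cloud to its nearest
    # neighbour in the reference cloud (inputs are Nx3 arrays).
    tree = cKDTree(reference_xyz)
    distances, _ = tree.query(test_xyz, k=1)
    return distances

reference = np.loadtxt("laser_scan.xyz")       # assumed ASCII XYZ export
test = np.loadtxt("sfm_dense_cloud.xyz")       # assumed ASCII XYZ export

d = cloud_to_cloud_deviation(reference, test)
print(f"mean deviation:  {d.mean() * 1000:.1f} mm")
print(f"RMS deviation:   {np.sqrt((d ** 2).mean()) * 1000:.1f} mm")
print(f"95th percentile: {np.percentile(d, 95) * 1000:.1f} mm")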
Abstract. In this paper we describe a mobile camera localization system that accurately estimates the pose of a hand-held camera inside a known urban environment. The work leverages a precomputed 3D structure obtained with a hierarchical Structure-from-Motion pipeline to compute the 2D-3D correspondences needed to orient the camera. The hierarchical cluster structure given by the SfM guides the localization process, providing accurate and reliable feature matching. Experiments in challenging outdoor environments demonstrate the effectiveness of the method compared to a standard image-retrieval approach.
In this paper we propose a complete system able to accurately localize a mobile agent wearing a camera inside a known environment. The work leverages a precomputed 3D structure to obtain 2D-3D correspondences and then orient the camera. Experiments in a challenging environment with a hand-made ground truth demonstrate sufficient accuracy to support the target application in real scenarios.

System overview
Our system builds on a structure-and-motion pipeline, called SAMANTHA [4], that produces a sparse set of 3D points endowed with appearance descriptors (the "model") by processing an unordered set of images of the scene (the "image archive"). Localization, or orientation, of the camera occurs via a linear algorithm that requires a set of 2D-3D point correspondences between the current frame and the model. Since the 2D points visible in one image are typically a small subset of the whole reconstruction, it is highly advisable to deploy pruning strategies that limit the set of 3D candidates. Our technique retrieves the archive images most similar to the current frame and then restricts the candidates to the points that are visible in the retrieved images. Retrieval follows a standard Bag-of-Words (BoW) approach with tf-idf weighting [6].
The system involves two main stages (see Fig. 1):
• an "offline" stage that runs SAMANTHA and indexes the images according to the BoW approach;
• an "online" stage during which the video stream captured by the mobile camera is transmitted over a Wi-Fi connection to a server that processes each frame in order to orient the camera, thereby localizing the mobile agent wearing it.
In particular, the online stage consists of the steps illustrated in Fig. 1; a sketch of one possible implementation is given at the end of this section. This work has been funded by the EU Project SAMURAI.

Experiments
We ran our tests in a challenging outdoor environment consisting of a parking area located between several buildings with repetitive structures. We recorded a video sequence with a proprietary device specifically designed within the EU project SAMURAI. To build the 3D model, 678 images (at a resolution of 2048 × 1536) of the whole scene were taken with a consumer camera, sampling almost the entire area every five meters. Four static calibrated cameras are located on the
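A minimal sketch of the online stage described in the system overview, written in Python with OpenCV, is given below. It is not the SAMURAI/SAMANTHA implementation: the BoW index interface (bow_index.top_k), the model accessors (model.points_visible_in, model.xyz, model.descriptor), the use of SIFT features and the RANSAC-PnP solver are all illustrative assumptions about how retrieval-pruned 2D-3D matching and pose estimation could be put together.

# Hedged sketch of the online localization step: BoW retrieval to shortlist
# archive images, pruning of the 3D candidates to the points seen in those
# images, 2D-3D descriptor matching, and robust pose estimation via PnP.
import numpy as np
import cv2

def localize_frame(frame_gray, K, bow_index, model, n_retrieved=5):
    # 1. Detect and describe local features in the current frame.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(frame_gray, None)

    # 2. Retrieve the archive images most similar to the frame (tf-idf BoW).
    retrieved_ids = bow_index.top_k(descriptors, k=n_retrieved)     # assumed helper

    # 3. Prune: keep only the 3D points observed in the retrieved images.
    candidate_ids = set()
    for image_id in retrieved_ids:
        candidate_ids.update(model.points_visible_in(image_id))     # assumed helper
    candidate_ids = sorted(candidate_ids)
    pts3d = np.float32([model.xyz[i] for i in candidate_ids])
    pts3d_desc = np.float32([model.descriptor[i] for i in candidate_ids])

    # 4. 2D-3D matching by nearest-neighbour descriptors with a ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(pts3d_desc, descriptors, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance]
    if len(good) < 4:
        return None
    object_points = np.float32([pts3d[m.queryIdx] for m in good])
    image_points = np.float32([keypoints[m.trainIdx].pt for m in good])

    # 5. Robust pose estimation from the 2D-3D correspondences (PnP + RANSAC).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
    return (rvec, tvec) if ok else None

A frame grabbed from the video stream would then be processed as localize_frame(gray_frame, K, bow_index, model), where K is the calibrated intrinsic matrix of the mobile camera.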