Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion (SfM) methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as autonomous imaging platforms, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications among audiences mostly unskilled in computer vision. However, obtaining high-resolution and accurate reconstructions of a large-scale object with SfM places many critical constraints on the quality of the image data, which often become sources of inaccuracy because current 3D reconstruction pipelines do not allow users to assess the fidelity of the input data during image acquisition. In this paper, we present and advocate a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy, visualized on a surface mesh. We also propose a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and release the first multi-scale image sequence dataset as a benchmark. Further, we evaluate our system on real outdoor scenes and show that our interactive pipeline combined with a multi-scale camera network provides compelling accuracy in multi-view reconstruction tasks compared to state-of-the-art methods.
The quality and completeness of 3D models obtained by Structure-from-Motion (SfM) depend heavily on the image acquisition process. If users receive feedback about reconstruction quality during acquisition, they can optimize this process. We propose an online SfM approach that allows inspection of the current reconstruction result on site. To guide the user throughout the acquisition, we visualize the current Ground Sampling Distance (GSD) and image redundancy as quality indicators on the surface model. The contributions of this paper are an online SfM framework for high-resolution still images that achieves an accuracy close to that of an off-line SfM method, and a visualization of quality measures that allows the user to optimize the image acquisition process. We compare the accuracy of the proposed online SfM to state-of-the-art batch-based SfM methods and demonstrate how our algorithm improves the acquisition process.
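The GSD indicator mentioned above measures the ground footprint of a single pixel, which is what determines the finest detail a reconstruction can resolve. A minimal sketch of the standard pinhole-camera GSD formula (the function name and parameters are illustrative; the paper's exact per-vertex computation on the surface model may differ):

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             distance_m, image_width_px):
    """Ground footprint of one pixel, in metres per pixel.

    Standard pinhole approximation: the sensor width projected
    through the lens onto the scene at the given distance, divided
    by the number of pixels across the sensor.
    """
    return (sensor_width_mm * distance_m) / (focal_length_mm * image_width_px)

# Illustrative values: 13.2 mm sensor, 8.8 mm focal length,
# camera-to-surface distance 30 m, 5472 px image width.
gsd = ground_sampling_distance(13.2, 8.8, 30.0, 5472)
# gsd is roughly 0.008 m/px, i.e. about 8 mm of ground per pixel
```

Visualizing this value per surface point lets the user see directly where the camera must move closer (or use a longer focal length) to reach a target resolution.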
Extracting surfaces from a sparse 3D point cloud in real time can benefit many applications based on Simultaneous Localization and Mapping (SLAM), such as occlusion handling and path planning. However, this is a complex task, since the sparse point cloud is noisy, irregularly sampled, and growing over time. In this paper, we propose a new method based on an optimal labeling of an incrementally reconstructed tetrahedralized point cloud. We propose a new sub-modular energy function that extracts surfaces with the same accuracy as the state of the art at reduced computation time. Furthermore, our energy function can be easily adapted to additional 3D points and incrementally minimized with a dynamic graph cut in an efficient manner. In this way, we are able to integrate several hundred 3D points per second while remaining largely independent of the overall scene size, making our novel method well suited for real-time SLAM applications.
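Sub-modularity is what makes the labeling above exactly minimizable by a graph cut: each pairwise term between neighboring tetrahedra (labeled inside/outside) must satisfy the standard representability inequality. A minimal illustration of that condition (this is not the paper's actual energy, just the generic binary-label check):

```python
def is_submodular(e00, e01, e10, e11):
    """Check graph-cut representability of a pairwise term over
    binary labels {0: inside, 1: outside}.

    A term E is sub-modular iff E(0,0) + E(1,1) <= E(0,1) + E(1,0),
    i.e. agreeing labels are never penalized more than disagreeing
    ones. Only then does the s-t min-cut yield the exact minimizer.
    """
    return e00 + e11 <= e01 + e10

# A typical smoothness term: zero cost for agreement, positive for
# a label change across a shared tetrahedron facet -> sub-modular.
smooth_ok = is_submodular(0.0, 1.0, 1.0, 0.0)

# A term that rewards disagreement would break representability.
smooth_bad = is_submodular(2.0, 0.0, 0.0, 1.0)
```

Because each new 3D point only adds or re-weights a few local terms, the dynamic graph cut can reuse the previous flow instead of re-solving from scratch, which is the source of the method's incremental efficiency.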
Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of an area of interest. In that context, automatic flight path planning and autonomous flying are often applied, but so far they cannot fully replace the human in the loop, who supervises the flight on site to ensure that there are no collisions with obstacles. Unfortunately, this workflow raises several issues, such as the need to mentally map the aerial vehicle's position between a 2D map and the physical environment, and the difficulty of perceiving the depth of a vehicle flying in the distance. Augmented Reality can address these issues by bringing the flight planning process on site and visualizing the spatial relationship between the planned or current positions of the vehicle and the physical environment. In this paper, we present Augmented Reality supported navigation and flight planning of micro aerial vehicles by augmenting the user's view with relevant information for flight planning and live feedback for flight supervision. Furthermore, we introduce additional depth hints that support the user in understanding the spatial relationship of virtual waypoints in the physical world, and we investigate the effect of these visualization techniques on spatial understanding.