This work presents a system for generating a free-form surface model from video sequences. Although any single-centered camera can be used in the proposed system, the approach is demonstrated with fisheye lenses because of their favorable properties for tracking. The system is designed to operate automatically and to be flexible with respect to the size and shape of the reconstructed scene. To minimize geometric assumptions, a statistical fusion of dense depth maps is employed. Special attention is paid to the necessary rectification of the spherical images and to the resulting iso-disparity surfaces, which can be exploited in the fusion approach. Before dense depth estimation can be performed, the cameras' pose parameters are extracted by means of a structure-from-motion (SfM) scheme. Here, automation is achieved by a thorough decision model based on robust statistics and error propagation of projective measurement uncertainties, which leads to a scene-independent set of only a few parameters. All system components are formulated in a general way, making it possible to cope with any single-centered projection model, in particular with spherical cameras. By using wide field-of-view cameras, the presented system is able to reliably retrieve poses and consistently reconstruct large scenes. A textured triangle mesh, constructed on the basis of the scene's reconstructed depth, makes the system's results suitable as reference models in a GPU-driven analysis-by-synthesis framework for real-time tracking.
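The abstract mentions a statistical fusion of dense depth maps but gives no formulas; the following is a minimal illustrative sketch, assuming per-pixel depth hypotheses with known variances are fused by inverse-variance weighting. The paper's actual formulation, which exploits the iso-disparity surfaces, may differ.

```python
import numpy as np

def fuse_depth_maps(depths, variances):
    """Fuse per-pixel depth hypotheses from several maps by
    inverse-variance (maximum-likelihood Gaussian) weighting.

    depths, variances: arrays of shape (n_maps, H, W); invalid pixels
    are marked with NaN. Returns fused depth and variance, each (H, W).
    """
    depths = np.asarray(depths, dtype=float)
    variances = np.asarray(variances, dtype=float)

    weights = 1.0 / variances           # inverse-variance weights
    weights[np.isnan(depths)] = 0.0     # ignore invalid measurements
    depths_filled = np.nan_to_num(depths, nan=0.0)

    weight_sum = weights.sum(axis=0)
    safe_sum = np.maximum(weight_sum, 1e-12)
    fused_depth = np.where(weight_sum > 0,
                           (weights * depths_filled).sum(axis=0) / safe_sum,
                           np.nan)
    fused_var = np.where(weight_sum > 0, 1.0 / safe_sum, np.nan)
    return fused_depth, fused_var

# Example: three noisy 2x2 depth maps with identical per-pixel variance
d = np.array([[[2.0, 2.1], [np.nan, 1.9]],
              [[2.2, 2.0], [3.0, 2.0]],
              [[1.9, np.nan], [3.1, 2.1]]])
v = np.full_like(d, 0.04)
fused, var = fuse_depth_maps(d, v)
print(fused)
```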
Abstract. Autonomous underwater vehicles (AUVs) offer unique possibilities for exploring the deep seafloor in high resolution over large areas. We highlight results from AUV-based multibeam echosounder (MBES) bathymetry/backscatter and digital optical imagery from the DISCOL area acquired during research cruise SO242 in 2015. AUV bathymetry reveals a morphologically complex seafloor with rough terrain in seamount areas and low-relief variations in sedimentary abyssal plains covered in Mn-nodules. Backscatter provides valuable information about the seafloor type and particularly about the influence of Mn-nodules on the response of the transmitted acoustic signal. Mn-nodule abundances were determined primarily by means of automated nodule detection on AUV seafloor imagery, and nodule metrics such as nodules per m² were calculated automatically for each image, allowing further spatial analysis within GIS in conjunction with the acoustic data. AUV-based backscatter was clustered using both raw data and corrected backscatter mosaics. In total, two unsupervised methods and one machine-learning approach were utilized for backscatter classification and Mn-nodule predictive mapping. Bayesian statistical analysis was applied to the raw backscatter values, resulting in six acoustic classes. In addition, Iterative Self-Organizing Data Analysis (ISODATA) clustering was applied to the backscatter mosaic and its statistics (mean, mode, 10th and 90th quantiles), likewise suggesting an optimum of six clusters. Part of the nodule-metrics data was combined with bathymetry, bathymetric derivatives and backscatter statistics for predictive mapping of the Mn-nodule density using a Random Forest classifier. Results indicate that the acoustic classes, the predictions from the Random Forest model and the image-based nodule metrics show very similar spatial distribution patterns, with the acoustic classes capturing most of the fine-scale Mn-nodule variability. Backscatter classes reflect areas with homogeneous nodule density. A strong influence of mean backscatter, fine-scale BPI and concavity of the bathymetry on nodule prediction is seen. These observations imply that nodule densities are generally affected by local micro-bathymetry in a way that is not yet fully understood. However, it can be concluded that the spatial occurrence of Mn-nodule-covered areas can be sufficiently analysed by means of acoustic classification and multivariate predictive mapping, allowing the spatial nodule density to be determined in a much more robust way than previously possible.

1. Introduction
Mn-nodules exploration

Research on Mn-nodules has received increased attention in the last decade due to increasing prices for ores rich in Cu, Ni or Co, i.e. metal resources that are contained in Mn-nodule...
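As an illustration of the predictive-mapping step described in the abstract above, the sketch below trains a Random Forest classifier on the kind of features the authors name (mean backscatter, fine-scale BPI, concavity, depth). The feature values, the class binning of nodule density and all parameter choices are synthetic placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-pixel feature table: mean backscatter, fine-scale BPI,
# concavity and depth. Labels are binned nodule-density classes
# (low / medium / high) derived from image-based nodule counts per m^2.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(-32, 3, n),    # mean backscatter [dB]
    rng.normal(0, 1, n),      # fine-scale BPI
    rng.normal(0, 0.5, n),    # concavity
    rng.normal(-4100, 40, n)  # depth [m]
])
# Synthetic label rule, only to make the example runnable
y = np.digitize(X[:, 0] + 2 * X[:, 1], bins=[-34, -30])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
print("feature importances:", clf.feature_importances_)
```

The feature importances reported by the forest correspond to the kind of evidence cited in the abstract for the influence of mean backscatter, fine-scale BPI and concavity on nodule prediction.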
We propose a marker-less, model-based camera tracking approach that makes use of GPU-assisted analysis-by-synthesis methods with a very wide field-of-view (e.g. fish-eye) camera. After an initial registration based on a learned database of robust features, the synthesis part of the tracking is performed on graphics hardware, which simulates the internal and external parameters of the camera, thereby minimizing lens and viewpoint differences between a model view and a real camera image. Based on an automatically reconstructed free-form surface model, we analyze the sensitivity of the tracking to model accuracy, in particular when curved surfaces are represented by planar patches. We also examine accuracy and show, on synthetic and real data, that the system does not suffer from drift accumulation. The wide field of view of the camera and the subdivision of our reference model into many textured free-form surface patches make the system robust against illumination changes, moving persons and other occlusions within the environment, and provide a camera pose estimate in a fixed and known coordinate system.
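A minimal sketch of the analysis-by-synthesis idea: synthesize a model view for a candidate pose, compare it photometrically with the live camera image, and refine the pose by minimizing the difference. The render function is only a placeholder for the GPU synthesis described above, and the optimizer choice is an assumption, not the paper's actual method.

```python
import numpy as np
from scipy.optimize import minimize

def render_view(pose, model, camera):
    """Placeholder for the GPU synthesis step: render the textured
    reference model under the given camera intrinsics/extrinsics and
    return a grayscale image. In the real system this runs on graphics
    hardware and simulates the fish-eye lens model."""
    raise NotImplementedError

def photometric_cost(pose, model, camera, observed):
    """Sum of squared intensity differences between the synthesized
    view and the observed camera image."""
    synthetic = render_view(pose, model, camera)
    diff = synthetic.astype(float) - observed.astype(float)
    return float(np.sum(diff * diff))

def refine_pose(initial_pose, model, camera, observed):
    """Refine the 6-DoF pose (3 rotation + 3 translation parameters),
    starting from the pose given by the feature-based initial
    registration, by minimizing the photometric cost."""
    result = minimize(photometric_cost, initial_pose,
                      args=(model, camera, observed),
                      method="Powell")
    return result.x
```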
In order to insert a virtual object into a TV image, the graphics system needs to know precisely how the camera is moving, so that the virtual object can be rendered in the correct place in every frame. Nowadays this can be achieved relatively easily in post-production, or in a studio equipped with a special tracking system. However, for live shooting on location, or in a studio that is not specially equipped, installing such a system can be difficult or uneconomic. To overcome these limitations, the MATRIS project is developing a real-time system for measuring the movement of a camera. The system uses image analysis to track naturally occurring features in the scene, and data from an inertial sensor. No additional sensors, special markers, or camera mounts are required. This paper gives an overview of the system and presents some results.
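The abstract does not specify how the image-based and inertial measurements are combined; as a loosely related illustration only, the sketch below fuses a gyro rate with a vision-based angle estimate using a simple complementary filter, one of the simplest possible fusion schemes and not necessarily the filter used in MATRIS.

```python
import numpy as np

def complementary_filter(gyro_rates, vision_angles, dt, alpha=0.98):
    """Minimal complementary filter: integrate the inertial (gyro) rate
    for smooth high-frequency motion and blend in the drift-free but
    noisier vision-based angle estimate at each frame.

    gyro_rates: angular-rate samples [rad/s]
    vision_angles: absolute angle estimates from image analysis [rad]
    dt: sample interval [s]
    """
    angle = vision_angles[0]
    fused = []
    for rate, vis in zip(gyro_rates, vision_angles):
        predicted = angle + rate * dt                   # inertial prediction
        angle = alpha * predicted + (1 - alpha) * vis   # vision correction
        fused.append(angle)
    return np.array(fused)

# Example: constant 0.1 rad/s camera pan with noisy measurements
t = np.arange(0, 2, 0.02)
truth = 0.1 * t
gyro = 0.1 + np.random.default_rng(1).normal(0, 0.01, t.size)
vision = truth + np.random.default_rng(2).normal(0, 0.05, t.size)
est = complementary_filter(gyro, vision, dt=0.02)
print("mean absolute error [rad]:", np.abs(est - truth).mean())
```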