In recent years, the videogame industry has seen a strong push toward gesture recognition and motion tracking, driven by the growing demand for immersive game experiences. The Microsoft Kinect sensor acquires RGB, IR and depth images at a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of the Kinect, based on time-of-flight technology. This paper proposes a calibration procedure for the imaging sensors of the Kinect for Xbox One, focusing on the depth camera. A mathematical model describing the error committed by the sensor as a function of the sensor-to-object distance has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements of each imaging sensor. Experimental results show that the quality of the delivered model improves when the proposed calibration procedure is applied, and the procedure is applicable both to point clouds and to the mesh model created with the Microsoft Fusion Libraries.
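A distance-dependent depth-error model of the kind described above can be sketched as follows. This is a minimal illustration with synthetic numbers, not the authors' actual data or model: a target is observed at known ranges, the systematic error is fitted with a low-order polynomial, and the fit is then used to correct raw depth readings.

```python
import numpy as np

# Hypothetical calibration data: a target placed at known distances (m)
# and the mean depth reported by the sensor at each distance.
true_dist = np.array([0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
measured = np.array([0.803, 1.205, 1.608, 2.012, 2.417,
                     2.823, 3.230, 3.638, 4.047])

error = measured - true_dist  # systematic depth error at each range

# Fit a low-order polynomial e(d) to the error as a function of the
# *measured* depth (the only quantity available at correction time);
# the corrected depth is then measured - e(measured).
coeffs = np.polyfit(measured, error, deg=2)
correct = lambda d: d - np.polyval(coeffs, d)

residual = correct(measured) - true_dist  # error left after calibration
```

The polynomial degree and the sampling of ranges are choices to be tuned on real data; the point is only that a smooth error-versus-distance model, once estimated, can be inverted cheaply per pixel.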
Abstract. We investigate snow depth distribution at peak accumulation over a small Alpine area (∼0.3 km²) using photogrammetry-based surveys with a fixed-wing unmanned aerial system (UAS). These devices are growing in popularity as inexpensive alternatives to existing techniques within the field of remote sensing, but the assessment of their performance in Alpine areas to map snow depth distribution is still an open issue. Moreover, several existing attempts to map snow depth using UASs have used multi-rotor systems, since they guarantee higher stability than fixed-wing systems. We designed two field campaigns: during the first survey, performed at the beginning of the accumulation season, the digital elevation model of the ground was obtained. A second survey, at peak accumulation, enabled us to estimate the snow depth distribution as a difference with respect to the previous aerial survey. Moreover, the spatial integration of UAS snow depth measurements enabled us to estimate the snow volume accumulated over the area. On the same day, we collected 12 probe measurements of snow depth at random positions within the case study to perform a preliminary evaluation of UAS-based snow depth. Results reveal that UAS estimations of point snow depth present an average difference with reference to manual measurements equal to −0.073 m and an RMSE equal to 0.14 m. We have also explored how some basic snow depth statistics (e.g., mean, standard deviation, minima and maxima) change with sampling resolution (from 5 cm up to ∼100 m): for this case study, snow depth standard deviation (hence coefficient of variation) increases with decreasing cell size, but it stabilizes for resolutions smaller than 1 m. This provides a possible indication of an adequate sampling resolution under similar conditions.
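The core of the workflow above is per-cell DEM differencing plus a point-wise evaluation against probes. The following sketch uses synthetic gridded surfaces and invented numbers (grid size, noise levels, cell size are all assumptions, not the study's data) just to make the arithmetic explicit:

```python
import numpy as np

# Hypothetical co-registered DSMs (m a.s.l.) on the same 5 cm grid:
# one snow-free survey and one at peak accumulation.
rng = np.random.default_rng(0)
ground = 2500.0 + rng.normal(0.0, 0.5, size=(200, 200))      # bare-ground DEM
depth_true = np.clip(rng.normal(1.2, 0.3, size=(200, 200)), 0.0, None)
snow_surface = ground + depth_true                           # snow-on DSM

# Snow depth is the per-cell difference of the two surveys.
hs = snow_surface - ground

# Evaluate against 12 manual probe measurements at random positions
# (probe readings simulated with 5 cm noise).
rows = rng.integers(0, 200, size=12)
cols = rng.integers(0, 200, size=12)
probes = depth_true[rows, cols] + rng.normal(0.0, 0.05, size=12)

diff = hs[rows, cols] - probes
bias = diff.mean()                       # average difference vs. probes
rmse = np.sqrt((diff ** 2).mean())       # RMSE vs. probes

# Snow volume over the plot: summed depth times cell area.
cell = 0.05  # m
volume = hs.sum() * cell * cell
```

The same differencing grid can be block-averaged at coarser cell sizes to reproduce the resolution-dependence analysis of the statistics mentioned in the abstract.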
UAVs find application in a variety of fields, among them small-scale surveys for environmental protection. In this context, experimental tests were carried out at Politecnico di Milano to assess the metric accuracy of images acquired by UAVs and of the derived photogrammetric products. A block of 5 strips and 49 photos was taken by the SenseFly fixed-wing system, carrying a Canon Ixus 220HS camera over a rural area included in an Italian park. Images were processed through bundle adjustment, automatic DEM extraction and orthoimage production with several software packages, in order to evaluate their characteristics, capabilities and weaknesses. The software packages tested were Erdas-LPS, EyeDEA (University of Parma), Agisoft Photoscan, Pix4UAV and PhotoModeler Scanner. For the georeferencing of the block, 16 pre-signalized ground control points were surveyed in the area through GPS (NRTK survey). Results are compared in terms of differences among orientation parameters and their accuracies, and the different digital surface models are compared with one another. Furthermore, the exterior orientation parameters, image points and ground point coordinates obtained by the various software packages were used as initial values in a comparative adjustment made with scientific in-house software. The paper confirms that computer-vision software packages are computationally faster and, even though their main goal is not high accuracy in point coordinate determination, they seem to produce results comparable to those obtainable with a standard photogrammetric approach. Agisoft Photoscan seems in this case to yield the best results in terms of quality of the photogrammetric products.
Performing two independent surveys in 2016 and 2017 over a flat sample plot (6700 m²), we compare snow-depth measurements from Unmanned-Aerial-System (UAS) photogrammetry and from a new high-resolution laser-scanning device (MultiStation) with manual probing, the standard technique used by operational services around the world. While previous comparisons already used laser scanners, we tested for the first time a MultiStation, which has a different measurement principle and is thus capable of millimetric accuracy. Both remote-sensing techniques measured point clouds with centimetric resolution, while we manually collected a relatively dense set of manual measurements (135 points in 2016 and 115 points in 2017). UAS photogrammetry and the MultiStation showed repeatable, centimetric agreement in measuring the spatial distribution of a seasonal, dense snowpack under optimal illumination and topographic conditions (maximum RMSE of 0.036 m between point clouds on snow). A large fraction of this difference could be due to simultaneous snowmelt, as the RMSE between UAS photogrammetry and the MultiStation on bare soil is equal to 0.02 m. The RMSE between UAS data and manual probing is on the order of 0.20-0.30 m, but decreases to 0.06-0.17 m when areas of potential outliers like vegetation or river beds are excluded. Compact and portable remote-sensing devices like UASs or a MultiStation can thus be successfully deployed during operational manual snow courses to capture spatial snapshots of snow-depth distribution with repeatable, vertical centimetric accuracy.
In the frame of the project FoGLIE (Fruition of Goods and Landscape in Interactive Environment), UASs were used to survey a park area in its less accessible zones, for scenic and stereoscopic videos, 3D modeling and vegetation monitoring. For this last application, specifically, very high resolution images were acquired with two UAS-borne compact cameras (RGB and NIR), and a DSM of a small vegetated area and the corresponding orthoimages were produced and co-registered. Planimetric and height accuracies in block adjustments and orthophotos are in the range of 0.10 m horizontally and 0.15 m in height. Then, after the derivation of synthetic channels, both unsupervised and supervised classification were performed in order to test the algorithms' ability to distinguish between different bush and tree species: some of these were correctly classified by the supervised method, but misclassifications remain. The overall accuracy for the unsupervised classification is about 50%, while the supervised one yields an overall accuracy of around 80%.
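Two of the quantities in this abstract can be made concrete in a short sketch. The NDVI used below is only an assumed example of a "synthetic channel" derivable from co-registered RGB and NIR orthoimages (the abstract does not name the channels actually used), and the confusion-matrix numbers are invented; overall accuracy is simply the trace of the confusion matrix over the total number of labelled samples.

```python
import numpy as np

# Assumed synthetic channel: NDVI from co-registered red and NIR
# reflectance orthoimages (toy 2x2 rasters).
red = np.array([[0.30, 0.25], [0.40, 0.20]])
nir = np.array([[0.60, 0.55], [0.45, 0.70]])
ndvi = (nir - red) / (nir + red + 1e-9)

# Overall accuracy of a classification from a (hypothetical) confusion
# matrix: rows are reference classes, columns are predicted classes.
confusion = np.array([
    [40,  5,  5],
    [ 6, 35,  9],
    [ 4,  8, 38],
])
overall_accuracy = np.trace(confusion) / confusion.sum()
```

Per-class producer's and user's accuracies follow from the same matrix by normalizing rows and columns, which is how the supervised/unsupervised comparison in the abstract would typically be broken down.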