In recent years, the videogame industry has seen a strong push toward gesture recognition and motion tracking, driven by the growing demand for immersive game experiences. The Microsoft Kinect sensor acquires RGB, IR and depth images at a high frame rate. Because of the complementary nature of the information provided, it has proven an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft brought a new generation of Kinect to market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. A mathematical model describing the sensor's error as a function of the distance between the sensor and the object was estimated. All the analyses presented here were conducted for both generations of Kinect, in order to quantify the improvements of each imaging sensor. Experimental results show that the quality of the delivered model improves when the proposed calibration procedure is applied; the procedure is applicable both to point clouds and to mesh models created with the Microsoft Fusion libraries.
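As a rough illustration of such a distance-dependent error model, the minimal Python sketch below fits a low-order polynomial to depth errors and uses it to correct raw readings. The quadratic form, the sample distances and the error values are all assumptions for illustration, not data from the paper.

```python
import numpy as np

# Hypothetical target distances (m) and mean depth errors (measured - true, m).
distances = np.array([0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
errors = np.array([0.002, 0.003, 0.005, 0.008, 0.011, 0.016, 0.021, 0.028, 0.035])

# Fit a low-order polynomial error model e(d) = a*d^2 + b*d + c.
model = np.poly1d(np.polyfit(distances, errors, deg=2))

def correct_depth(measured_depth):
    """Subtract the modeled systematic error from a raw depth reading."""
    return measured_depth - model(measured_depth)

print("corrected 3.0 m reading:", correct_depth(3.0))
```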
UAVs find application in a variety of fields, among them small-scale surveys for environmental protection. In this context, experimental tests were carried out at Politecnico di Milano to assess the metric accuracy of images acquired by UAVs and of the derived photogrammetric products. A block of 5 strips and 49 photos was acquired by the fixed-wing SenseFly system, carrying a Canon Ixus 220HS camera, over a rural area within an Italian park. The images were processed through bundle adjustment, automatic DEM extraction and orthoimage production with several software packages, with the aim of evaluating their characteristics, capabilities and weaknesses. The software packages tested were Erdas-LPS, EyeDEA (University of Parma), Agisoft Photoscan, Pix4UAV and PhotoModeler Scanner. For the georeferencing of the block, 16 pre-signalized ground control points were surveyed in the area by GPS (NRTK survey). Comparison of results is given in terms of differences among orientation parameters and their accuracies. Moreover, the different digital surface models are compared. Furthermore, the exterior orientation parameters, image points and ground point coordinates obtained by the various software packages were used as initial values in a comparative adjustment performed with scientific in-house software. The paper confirms that computer vision software packages are faster in computation and, even though their main goal is not high accuracy in point coordinate determination, they seem to produce results comparable to those obtainable with the standard photogrammetric approach. Agisoft Photoscan seems in this case to yield the best results in terms of quality of photogrammetric products.
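To make this kind of comparison concrete, the minimal sketch below computes the per-axis RMSE of check-point coordinate differences between two adjustment solutions. The coordinate arrays are hypothetical placeholders, not results from the test.

```python
import numpy as np

# Hypothetical check-point coordinates (E, N, h) from two packages, shape (n, 3).
sol_a = np.array([[512310.12, 5045120.45, 187.32],
                  [512400.88, 5045210.10, 188.01],
                  [512501.23, 5045305.77, 186.54]])
sol_b = np.array([[512310.15, 5045120.41, 187.36],
                  [512400.84, 5045210.15, 187.97],
                  [512501.27, 5045305.73, 186.60]])

diff = sol_a - sol_b
rmse = np.sqrt(np.mean(diff**2, axis=0))  # per-axis RMSE (E, N, h)
print("RMSE E/N/h [m]:", rmse)
```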
Performing two independent surveys in 2016 and 2017 over a flat sample plot (6700 m²), we compare snow-depth measurements from Unmanned-Aerial-System (UAS) photogrammetry and from a new high-resolution laser-scanning device (MultiStation) with manual probing, the standard technique used by operational services around the world. While previous comparisons have used laser scanners, we tested a MultiStation for the first time; it has a different measurement principle and is thus capable of millimetric accuracy. Both remote-sensing techniques produced point clouds with centimetric resolution, while we collected a relatively dense set of manual measurements (135 points in 2016 and 115 points in 2017). UAS photogrammetry and the MultiStation showed repeatable, centimetric agreement in measuring the spatial distribution of a seasonal, dense snowpack under optimal illumination and topographic conditions (maximum RMSE of 0.036 m between point clouds on snow). A large fraction of this difference could be due to simultaneous snowmelt, as the RMSE between UAS photogrammetry and the MultiStation on bare soil is 0.02 m. The RMSE between UAS data and manual probing is on the order of 0.20-0.30 m, but decreases to 0.06-0.17 m when areas of potential outliers, such as vegetation or river beds, are excluded. Compact and portable remote-sensing devices like UASs or a MultiStation can thus be successfully deployed during operational manual snow courses to capture spatial snapshots of snow-depth distribution with repeatable, centimetric vertical accuracy.
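A minimal sketch of such a cloud-to-cloud comparison is shown below: for each point of one cloud, the nearest point of the reference cloud in plan is found, and the RMSE of the elevation differences is computed. The clouds are synthetic placeholders; the co-registration steps of the actual survey are not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Synthetic snow-surface clouds (x, y, z) over an 80 m x 80 m plot.
ms_cloud = rng.uniform(0.0, 80.0, size=(5000, 3))
ms_cloud[:, 2] = 1.5 + 0.01 * rng.standard_normal(5000)   # "MultiStation" cloud
uas_cloud = rng.uniform(0.0, 80.0, size=(5000, 3))
uas_cloud[:, 2] = 1.5 + 0.02 * rng.standard_normal(5000)  # "UAS" cloud

tree = cKDTree(ms_cloud[:, :2])          # index the reference cloud in plan (x, y)
_, idx = tree.query(uas_cloud[:, :2])    # nearest reference point per UAS point
dz = uas_cloud[:, 2] - ms_cloud[idx, 2]  # elevation differences
print("cloud-to-cloud RMSE [m]:", np.sqrt(np.mean(dz**2)))
```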
UAV systems are a flexible technology able to collect large amounts of high-resolution information, for both metric and interpretation uses. In the frame of experimental tests carried out at Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess the metric accuracy of images acquired by UAVs, a block of photos taken by a fixed-wing system was triangulated with several software packages. The test field is a rural area within an Italian park ("Parco Adda Nord"), useful for studying flight and imagery performance over buildings, roads, and cultivated and uncultivated vegetation. The SenseFly UAV, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided into 5 strips. Sixteen pre-signalized ground control points, surveyed in the area by GPS (NRTK survey), allowed the georeferencing of the block and the accuracy analyses. Approximate values of the exterior orientation parameters (positions and attitudes) were recorded by the flight control system. The block was processed with several software packages: Erdas-LPS, EyeDEA (Univ. of Parma), Agisoft Photoscan and Pix4UAV, in assisted or automatic mode. Comparisons of results are given in terms of differences among digital surface models and differences in orientation parameters and accuracies, where available. Moreover, the image and ground point coordinates obtained by the various software packages were independently used as initial values in a comparative adjustment performed with scientific in-house software, which can apply constraints to evaluate the effectiveness of different point extraction methods and the accuracy on ground check points.
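For orientation, the sketch below reproduces the ground-sample-distance arithmetic implied by the flight configuration above; the focal length and pixel pitch for the Canon Ixus 220HS are assumed nominal values, not figures from the paper.

```python
# Assumed nominal values for the Canon Ixus 220HS (not taken from the paper):
focal_length_m = 4.3e-3   # wide-end focal length [m]
pixel_pitch_m = 1.55e-6   # sensor pixel pitch [m]
flight_height_m = 130.0   # flight height above ground, from the abstract [m]

# Pinhole relation: GSD = pixel pitch x flight height / focal length.
gsd_m = pixel_pitch_m * flight_height_m / focal_length_m
print(f"approximate GSD: {gsd_m:.3f} m/px")  # about 0.047 m/px
```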
We compute the volume of flushed sediments in a dam reservoir using photogrammetry-based multi-temporal surveys with an unmanned aerial system (UAS). Coping with sediment accumulation and erosion in reservoirs is a living topic in modern dam hydraulics, since sediment build-up may reduce reservoir capacity, endanger the dam's stability and represent an economic loss. As a result, a number of remedies can be considered, such as flushing or mechanical removal. To evaluate the performance of these operations, it is important to measure the volume of removed sediments and their spatial distribution. Here, we show that photogrammetry from UASs is a suitable solution for estimating the volume of removed sediments. The case study is the Fusino dam (Lombardia region, Northern Italy). Two surveys were performed, before and after sediment removal. In both cases, the flight was planned with an average flight height of 65 m, leading to a mean ground sample distance (GSD) of 0.013 m. The 22 ground control points (GCPs) used to adjust the photogrammetric block were measured with both a global navigation satellite system (GNSS) and a total station. Each survey produced a cloud of about 40 million points. Moreover, the digital surface model (DSM) produced by each photogrammetric flight was validated against sample points measured with a robotic total station. Results show high consistency between the computed DSMs and the validation datasets, with mean height differences of 0.003 m and −0.004 m for the two surveys, and a standard deviation of about 0.05 m in both cases. The volume of flushed sediments was estimated to be about 26,000 m³, which represents about 2-3% of the total reservoir capacity. We also estimated a 6% difference in reservoir capacity between the present condition and the sediment-free condition.
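A minimal sketch of volume estimation by DSM differencing is given below, assuming two co-registered gridded DSMs; the grids and cell size are synthetic placeholders, and the paper's actual processing chain (point-cloud generation, gridding, validation) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
cell = 0.05  # grid cell size [m]

# Synthetic 20 m x 20 m grids standing in for the two survey DSMs.
dsm_before = 100.0 + rng.random((400, 400))  # pre-flushing surface [m]
dsm_after = dsm_before - 0.02                # post-flushing surface [m]

dz = dsm_before - dsm_after        # positive where material was removed
volume = np.nansum(dz) * cell**2   # sum of height changes x cell area [m^3]
print(f"removed volume: {volume:.1f} m^3")
```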