Skyline planning: accuracy of single-tree detection with drone-generated aerial photos

Information collected with drones has the potential to simplify the planning and marking of skylines. The prerequisite is that (1) the tree coordinates of potential intermediate support and anchor trees and (2) their diameter at breast height (DBH) can be determined with sufficient accuracy from the surface model obtained from the drone data. To analyse the achievable accuracies, two Swiss marteloscopes were surveyed with a senseFly eBee Classic drone at a flight height of 180 m, and aerial photos were taken with a Sony camera with a resolution of 18.2 megapixels. In the resulting normalised surface model (nDOM), the local maxima (treetops) were determined with the single-tree detection software "FINT". For these treetops, the coordinates and the DBH were determined. The detection rate for both marteloscopes was 65%. On average, the coordinates deviated by less than 1.4 m from the terrestrial reference tree coordinates; the predominant and dominant trees could be located even more precisely. The DBH was determined with an average accuracy of 5 cm. A practical test with nine skylines showed that the coordinates were accurate enough to use the support trees identified in the nDOM for the technical layout of the skylines. However, an on-site inspection is still necessary to check the potential intermediate support trees for damage that is not visible in the aerial view.
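The abstract describes treetop detection as finding local maxima in the nDOM with FINT. As a rough illustration of that general idea (not the FINT implementation), the following Python sketch locates local maxima in an nDOM raster; the file name, window size and minimum tree height are illustrative assumptions.

```python
# Minimal sketch of local-maxima treetop detection in a normalised surface
# model (nDOM). This is NOT the FINT implementation; the file name, window
# size and height threshold below are illustrative assumptions.
import numpy as np
import rasterio
from scipy import ndimage

def detect_treetops(ndom_path, window_m=5.0, min_height_m=10.0):
    with rasterio.open(ndom_path) as src:
        ndom = src.read(1)
        transform = src.transform
        pixel_size = transform.a  # assumes square pixels

    # A pixel is a candidate treetop if it equals the maximum within the
    # moving window and exceeds the minimum tree height.
    window_px = max(3, int(round(window_m / pixel_size)))
    local_max = ndimage.maximum_filter(ndom, size=window_px)
    candidates = (ndom == local_max) & (ndom >= min_height_m)

    rows, cols = np.nonzero(candidates)
    # Convert pixel indices to map coordinates (treetop positions).
    xs, ys = rasterio.transform.xy(transform, rows, cols)
    heights = ndom[rows, cols]
    return list(zip(xs, ys, heights))

# Example: treetops = detect_treetops("ndom.tif")
```

A height–DBH relationship (for example an allometric model) would be a typical way to derive a DBH for each detected treetop, although the abstract does not specify how the DBH was estimated.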
Globally, there is a wide variety of systems and forms of legal legitimacy concerning land ownership and the ways in which it is acquired. Based on the research, we suggest that, for every specific issue concerning land ownership and land use, the important question related to sustainable forest management (PHSL) is the security of intergenerational access to resources (compare Colfer et al. 1997b, 1988).
Urban landscapes are characterized as the fastest-changing areas on the planet. However, regular monitoring of larger areas is not feasible with UAVs or costly airborne data. In these situations, satellite data with a high temporal resolution and large field of view are more appropriate, but suffer from a lower spatial resolution (on the order of tens of metres). In the present study we show that, using freely available Sentinel-2 data from the Copernicus program, we can extract anthropogenic features such as roads, railways and building footprints that are partly or completely at a sub-pixel level in this kind of data.

Additionally, we propose a new metric for evaluating our methods on sub-pixel objects. This metric measures the performance of object detection while penalizing false-positive classifications. Given that our training samples contain one class, we define two thresholds that represent the lower bound of accuracy for the object to be classified and for the background. We thus avoid awarding a good score in cases where the object is classified correctly but a wide area of the background has been included in the prediction.

We investigate the performance of different deep-learning architectures for sub-pixel classification of the different infrastructure elements based on Sentinel-2 multispectral data and labels derived from UAV data. Our study area is located in the Rhone valley in Switzerland, where very high-resolution UAV data were available from the University of Applied Sciences. Highly accurate labels for the respective classes were digitized in ArcGIS Pro and used as ground truth for the Sentinel data. We trained different deep-learning models based on state-of-the-art architectures for semantic segmentation, such as DeepLab and U-Net.

Our approach focuses on exploiting the multispectral information to improve on the performance of the RGB channels alone. For that purpose, we make use of the NIR and SWIR 10 m and 20 m bands of the Sentinel-2 data. We investigate early- and late-fusion approaches and the contribution of each multispectral band to performance in comparison to using only the RGB channels. In the early-fusion approach, we stack nine (RGB, NIR, SWIR) Sentinel-2 bands together, pass them through two convolutions followed by batch normalization and ReLU layers, and then feed the tiles to DeepLab. In the late-fusion approach, we create a CNN with two branches, the first branch processing the RGB channels and the second branch the NIR/SWIR bands. We use modified DeepLab layers for the two branches and then concatenate the outputs into a total of 512 feature maps. We then reduce the dimensionality of the result to a final output equal to the number of classes; this dimension-reduction step happens in two convolution layers.

We experiment with different settings for all of the mentioned architectures. In the best-case scenario, we achieve 89% overall accuracy, with class-wise accuracies of 60% for buildings, 60% for streets, 73% for railways, 92% for rivers and 94% for background.
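To make the late-fusion structure described above more concrete, the following PyTorch sketch shows the general idea: two branches (RGB and NIR/SWIR), concatenation into 512 feature maps, and a two-convolution head reducing to the class scores. The simple encoder blocks stand in for the modified DeepLab branches of the study, and all channel counts and names are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of the late-fusion idea: one branch for RGB, one for
# NIR/SWIR, concatenation to 512 feature maps, then a two-convolution
# reduction to class scores. Encoder blocks are placeholders, NOT the
# modified DeepLab branches used in the study.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class LateFusionSegmenter(nn.Module):
    def __init__(self, n_classes=5, rgb_channels=3, msi_channels=6):
        super().__init__()
        # Placeholder encoders; in the study these are modified DeepLab branches.
        self.rgb_branch = nn.Sequential(conv_block(rgb_channels, 64), conv_block(64, 256))
        self.msi_branch = nn.Sequential(conv_block(msi_channels, 64), conv_block(64, 256))
        # Two convolution layers reduce the 512 fused feature maps to class scores.
        self.head = nn.Sequential(conv_block(512, 128),
                                  nn.Conv2d(128, n_classes, kernel_size=1))

    def forward(self, rgb, msi):
        fused = torch.cat([self.rgb_branch(rgb), self.msi_branch(msi)], dim=1)  # 512 maps
        return self.head(fused)

# Example with a 256x256 tile (bands resampled to a common grid):
# model = LateFusionSegmenter(n_classes=5)
# logits = model(torch.randn(1, 3, 256, 256), torch.randn(1, 6, 256, 256))
```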