Classification of aerial point clouds with high accuracy is significant for many geographical applications, but is not trivial as the data are massive and unstructured. In recent years, deep learning for 3D point cloud classification has been actively developed and applied, but mostly for indoor scenes. In this study, we implement the point-wise deep learning method Dynamic Graph Convolutional Neural Network (DGCNN) and extend its classification application from indoor scenes to airborne point clouds. This study proposes an approach to provide inexpensive training samples for point-wise deep learning using an existing 2D base map. Furthermore, the essential features and spatial contexts needed to effectively classify airborne point clouds colored by an orthophoto are also investigated, in particular to deal with class imbalance and relief displacement in urban areas. Two airborne point cloud datasets of different areas are used: Area-1 (city of Surabaya, Indonesia) and Area-2 (cities of Utrecht and Delft, the Netherlands). Area-1 is used to investigate different input feature combinations and loss functions. The point-wise classification for four classes achieves a remarkable result of 91.8% overall accuracy when using the full combination of spectral color and LiDAR features. For Area-2, different block size settings (30, 50, and 70 m) are investigated. We find that using an appropriate block size, in this case 50 m, improves the overall accuracy to 93% but does not necessarily ensure better classification results for each class. Based on the experiments on both areas, we conclude that DGCNN with proper settings is able to provide results close to production quality.
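To illustrate two of the practical steps described above, the following is a minimal Python sketch (using numpy and PyTorch) of partitioning an airborne point cloud into fixed-size ground blocks and building a class-frequency-weighted loss to counter class imbalance. The block size, points-per-block, and feature layout are assumptions for illustration, not the exact settings reported in the study.

```python
import numpy as np
import torch
import torch.nn as nn

def make_blocks(points, block_size=50.0, num_points=4096):
    """Split a point cloud into square ground blocks and sample a fixed
    number of points per block (block_size in metres is an assumed setting).
    points: (N, F) array; columns 0-2 are x, y, z and the remaining columns
    hold per-point features such as R, G, B, intensity, return number."""
    xy_min = points[:, :2].min(axis=0)
    cell = np.floor((points[:, :2] - xy_min) / block_size).astype(int)
    blocks = []
    for key in np.unique(cell, axis=0):
        idx = np.where(np.all(cell == key, axis=1))[0]
        # sample with replacement when a block holds fewer than num_points points
        choice = np.random.choice(idx, num_points, replace=idx.size < num_points)
        blocks.append(points[choice])
    return np.stack(blocks)  # (num_blocks, num_points, F)

def weighted_ce_loss(class_counts):
    """Cross-entropy weighted by inverse class frequency, one simple way to
    address the class imbalance mentioned in the abstract."""
    freq = class_counts / class_counts.sum()
    weights = torch.tensor(1.0 / (freq + 1e-6), dtype=torch.float32)
    return nn.CrossEntropyLoss(weight=weights / weights.sum())
```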
Commission I, WG I/4. KEY WORDS: VHR systematic-ortho, orthorectified image, GCP, DEM, accuracy. ABSTRACT: Very High Resolution (VHR) satellite imagery such as Pleiades, WorldView-2, and GeoEye-1 used for precise mapping must be corrected for distortions to achieve the expected accuracy. Orthorectification is performed to eliminate the geometric errors of VHR satellite imagery. Orthorectification requires two main inputs: a Digital Elevation Model (DEM) and Ground Control Points (GCPs). The VHR systematic-ortho imagery was generated using the SRTM 30 m DEM without any GCP data. The difference in accuracy between VHR systematic-ortho imagery and VHR imagery orthorectified using GCPs is currently not well defined. This study aims to quantify the accuracy of VHR systematic-ortho imagery compared against imagery orthorectified using GCPs. The GCP-based orthorectified imagery was created using a rigorous sensor model. Accuracy is evaluated using several independent check points.
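As a small illustration of the check-point evaluation step, the sketch below computes planimetric RMSE against independent check points. The abstract does not specify the exact error metrics used, so this follows common accuracy-assessment practice and the function and array names are hypothetical.

```python
import numpy as np

def check_point_rmse(image_xy, reference_xy):
    """Planimetric accuracy of orthorectified imagery against independent
    check points. Both inputs are (N, 2) arrays of easting/northing in metres."""
    d = image_xy - reference_xy
    rmse_x = np.sqrt(np.mean(d[:, 0] ** 2))
    rmse_y = np.sqrt(np.mean(d[:, 1] ** 2))
    rmse_r = np.sqrt(rmse_x ** 2 + rmse_y ** 2)  # combined (radial) RMSE
    return rmse_x, rmse_y, rmse_r
```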
Abstract. Automation of 3D LiDAR point cloud processing is expected to increase the production rate of many applications, including automatic map generation. Fast development of high-end hardware has boosted the expansion of deep learning research for 3D classification and segmentation. However, deep learning requires large amounts of high-quality training samples. The generation of training samples for accurate classification results, especially for airborne point cloud data, is still problematic. Moreover, it is still unclear which tailor-made features are best suited for segmenting airborne point cloud data. This paper proposes semi-automatic point cloud labelling and examines the potential of combining different tailor-made features for point-wise semantic segmentation of an airborne point cloud. We implement a Dynamic Graph CNN (DGCNN) approach to classify airborne point cloud data into four land cover classes: bare land, trees, buildings, and roads. The DGCNN architecture is chosen because this network combines two approaches, PointNet and graph CNNs, to exploit the geometric relationships between points. For the experiments, we train DGCNN on an airborne point cloud and a co-aligned orthophoto of the city of Surabaya, Indonesia, using three tailor-made feature combinations: points with RGB (Red, Green, Blue) color, points with the original LiDAR features Intensity, Return number, and Number of returns (IRN), and points with two spectral colors and Intensity (Red, Green, Intensity; RGI). The overall accuracy on the testing area indicates that using RGB information gives the best segmentation result, 81.05%, while IRN and RGI give 76.13% and 79.81%, respectively.
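For reference, the core geometric operation that lets DGCNN exploit relationships between points is EdgeConv, which builds a k-nearest-neighbour graph and stacks edge features [x_j - x_i, x_i] for each point and its neighbours. The sketch below follows the widely used public PyTorch implementation pattern of this step; it is not necessarily the authors' exact code, and the value k = 20 is an assumption.

```python
import torch

def knn(x, k):
    """Pairwise k-nearest neighbours in feature space.
    x: (B, C, N) batch of point features; returns (B, N, k) neighbour indices."""
    inner = -2 * torch.matmul(x.transpose(2, 1), x)        # (B, N, N)
    xx = torch.sum(x ** 2, dim=1, keepdim=True)            # (B, 1, N)
    dist = -xx - inner - xx.transpose(2, 1)                 # negative squared distances
    return dist.topk(k=k, dim=-1)[1]

def edge_features(x, k=20):
    """Build DGCNN-style edge features [x_j - x_i, x_i] for each point and
    each of its k nearest neighbours. Output shape: (B, 2C, N, k)."""
    B, C, N = x.shape
    idx = knn(x, k)                                          # (B, N, k)
    idx_base = torch.arange(B, device=x.device).view(-1, 1, 1) * N
    idx = (idx + idx_base).view(-1)
    x_t = x.transpose(2, 1).contiguous()                     # (B, N, C)
    neighbours = x_t.view(B * N, C)[idx].view(B, N, k, C)
    central = x_t.view(B, N, 1, C).expand(-1, -1, k, -1)
    return torch.cat((neighbours - central, central), dim=3).permute(0, 3, 1, 2)
```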
ABSTRACT: A digital elevation model (DEM) describes the shape of the earth's surface. DEMs can be produced from a wide variety of data sources, including radar data, LiDAR data, and stereo satellite imagery. A LiDAR DEM is generated from the point cloud data acquired by the LiDAR sensor. A DEM from stereo satellite imagery can be generated from imagery of the same epoch or from multitemporal stereo satellite imagery. The accuracy of a DEM generated from multitemporal stereo satellite imagery relative to LiDAR data is not known with certainty. This study was conducted using a LiDAR DEM and a DEM from multitemporal stereo satellite imagery. The multitemporal stereo DEM was generated semi-automatically using three scenes of stereo satellite imagery acquired in 2013-2014. The height values of each DEM serve as the basis for calculating the height accuracy of the respective DEM. The results show height differences of a fraction of a meter between the LiDAR DEM and the multitemporal stereo satellite imagery DEM.
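As an illustration of how such a DEM comparison could be computed, the sketch below differences two co-registered DEM grids and reports basic height error statistics. The abstract does not describe the computation in detail, so the grid alignment, nodata value, and function name are assumptions.

```python
import numpy as np

def dem_difference_stats(test_dem, reference_dem, nodata=-9999.0):
    """Height differences between a test DEM (e.g. from multitemporal stereo
    imagery) and a reference DEM (e.g. LiDAR) on the same, co-registered grid.
    Returns mean error, standard deviation, and RMSE in the DEM's height unit."""
    valid = (test_dem != nodata) & (reference_dem != nodata)
    diff = test_dem[valid] - reference_dem[valid]
    return diff.mean(), diff.std(), np.sqrt(np.mean(diff ** 2))
```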