3D scene classification has become an important research field in photogrammetry, remote sensing, computer vision and robotics with the widespread use of 3D point clouds. Point cloud classification, also referred to as semantic labeling, semantic segmentation, or semantic classification of point clouds, is a challenging topic. Machine learning is a powerful mathematical tool for classifying 3D point clouds, whose content can be significantly complex. In this study, the classification performance of different machine learning algorithms at multiple scales was evaluated. The feature spaces of the points in the point cloud were built from geometric features derived from the eigenvalues of the local covariance matrix. Eight supervised classification algorithms were tested on four areas from three datasets (the Dublin City, Vaihingen and Oakland3D datasets). The algorithms were evaluated in terms of overall accuracy, precision, recall, F1 score and processing time. The best overall accuracy for each test area was achieved by a different algorithm: 93.12% on Dublin City Area 1 with Random Forest, 92.78% on Dublin City Area 2 with a Multilayer Perceptron, 79.71% on Vaihingen with Support Vector Machines, and 97.30% on Oakland3D with Linear Discriminant Analysis.
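The eigenvalue-based features mentioned above can be sketched as follows. This is a minimal illustration of the common covariance-feature construction (linearity, planarity, sphericity, etc.); the exact feature set and neighborhood definition used in the study may differ.

```python
import numpy as np

def covariance_features(neighborhood):
    """Eigenvalue-based geometric features for one point's local
    neighborhood, given as an (N, 3) array of XYZ coordinates.

    Sketch of the standard covariance-feature construction; the
    paper's exact feature set is not reproduced here.
    """
    # 3x3 covariance matrix of the neighborhood coordinates
    cov = np.cov(neighborhood.T)
    # Eigenvalues in descending order: l1 >= l2 >= l3 >= 0
    eigvals = np.linalg.eigvalsh(cov)[::-1]
    l1, l2, l3 = eigvals / eigvals.sum()  # normalized eigenvalues
    return {
        "linearity":    (l1 - l2) / l1,
        "planarity":    (l2 - l3) / l1,
        "sphericity":   l3 / l1,
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
        "anisotropy":   (l1 - l3) / l1,
    }
```

For a perfectly planar neighborhood, planarity approaches 1 while sphericity approaches 0, which is what makes these features discriminative for classes such as roofs and facades.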
Many different disciplines use deep learning algorithms for various purposes. In recent years, object detection by deep learning from aerial or terrestrial images has become a popular research area. In this study, object detection was performed by training the YOLOv2 and YOLOv3 algorithms, implemented in Python, on the Google Colaboratory cloud service with the DOTA dataset of aerial photographs. 43 aerial photographs containing objects from 9 classes were used for evaluation: large vehicle, small vehicle, plane, harbor, storage tank, ship, basketball court, tennis court and swimming pool. Accuracy analyses of the two algorithms were made in terms of recall, precision and F1-score for the nine classes, and the results were compared accordingly. YOLOv2 gave better results in 5 out of 9 classes, while YOLOv3 was better at recognizing small objects. The best result with YOLOv2 was obtained for the plane class with a 99% F1-score, while the best result with YOLOv3 was obtained for the swimming pool class with 83%. YOLOv2 detected the objects in an average photograph in 43 seconds, whereas YOLOv3 achieved clearly superior runtime performance, averaging 2.5 seconds per photograph.
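The per-class metrics used in the comparison can be computed from detection counts as below. The counts in the usage note are illustrative values, not numbers from the study.

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1-score from true-positive,
    false-positive and false-negative detection counts for one class.

    Sketch of the standard metric definitions used to compare the
    detectors; input counts are illustrative.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, a class with 90 true positives, 10 false positives and 10 false negatives yields precision, recall and F1 all equal to 0.9.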
Mobile light detection and ranging (LiDAR) sensor point clouds are used in many fields such as road network management, architecture and urban planning, and 3D High Definition (HD) city maps for autonomous vehicles. Semantic segmentation of mobile point clouds is critical for these tasks. In this study, we present a robust and effective deep learning-based point cloud semantic segmentation method. Semantic segmentation is applied to range images produced from the point cloud by spherical projection: the irregular 3D mobile point cloud is transformed into a regular form by projecting it onto a plane, generating a 2D representation that is fed to the proposed network. In addition, a local geometric feature vector is calculated for each point. Parameter experiments were performed to obtain the best segmentation results. The proposed technique, called SegUNet3D, is an ensemble approach combining the U-Net and SegNet architectures. SegUNet3D was compared with five segmentation algorithms on two challenging datasets: SemanticPOSS covers an urban area, whereas RELLIS-3D covers an off-road environment. The study demonstrates that the proposed approach is superior to the other methods in terms of mean Intersection over Union (mIoU) on both datasets, improving mIoU by up to 15.9% on SemanticPOSS and up to 5.4% on RELLIS-3D.
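The spherical projection step described above can be sketched as follows. Image size and vertical field of view are assumed values (here typical of a 64-beam sensor), not parameters from the paper.

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) range image.

    Minimal sketch of the spherical projection the paper describes;
    h, w and the vertical field of view are assumed values.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)        # range of each point
    yaw = np.arctan2(y, x)                    # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                  # elevation angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    # Map angles to pixel coordinates
    u = 0.5 * (1.0 - yaw / np.pi) * w                               # column
    v = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * h  # row
    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)
    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r   # if several points share a pixel, the last one wins
    return image
```

The resulting (h, w) range image is a regular grid, so any 2D encoder-decoder network (such as the U-Net and SegNet components of the ensemble) can consume it directly; per-pixel labels are then projected back onto the 3D points.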
Solar energy is a renewable energy source derived directly from sunlight, and its production depends on roof characteristics such as roof type and size. In solar potential analysis, the main purpose is to determine roofs suitable for the placement of solar panels; roof plane detection therefore plays a crucial role in solar energy assessment. In this study, a detailed comparison was presented between aerial photogrammetry data and LiDAR (Light Detection and Ranging) data for roof plane recognition using the RANSAC (Random Sample Consensus) algorithm. RANSAC was applied to 3D point clouds obtained from both airborne LiDAR and an aerial photogrammetric survey, so that solar energy assessment can be carried out on the results. It is shown that RANSAC detects building roofs better on the airborne LiDAR point cloud in terms of model completeness, since the aerial photogrammetric survey yields noisy data despite its higher point density; this noise in the source data leads to deformations in roof plane detection. The study area of the project is the campus of Istanbul Technical University. Accuracy information for the roof extraction of three different buildings is presented in tables.
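The RANSAC plane-detection step can be sketched as follows. The iteration count and inlier distance threshold (in metres) are assumed values, not parameters from the study.

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, rng=None):
    """Find the dominant plane in an (N, 3) point cloud with RANSAC.

    Minimal sketch of single-plane RANSAC; n_iter and threshold are
    assumed values. Returns a boolean inlier mask.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # 1. Sample three points and derive the candidate plane normal
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        normal /= norm
        # 2. Point-to-plane distances and inlier test
        dist = np.abs((points - p0) @ normal)
        inliers = dist < threshold
        # 3. Keep the plane supported by the most inliers
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

To extract several roof planes, this procedure is typically run repeatedly, removing each detected plane's inliers before the next run. The noise sensitivity noted in the abstract shows up directly in the distance threshold: photogrammetric noise pushes roof points outside the inlier band and fragments the detected planes.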