Abstract. The use of heritage point clouds for documentation and dissemination purposes is increasing. Associating semantic information with 3D data by means of automated classification methods can help to characterize, describe and better interpret the object under study. In the last decades, machine learning methods have brought significant progress to classification procedures, yet the cultural heritage domain has not been fully explored. This paper presents research on the classification of heritage point clouds using different supervised learning approaches (both machine learning and deep learning). The classification aims to automatically recognize architectural components such as columns, facades or windows in large datasets. For each case study and employed classification method, different accuracy metrics are calculated and compared.
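As a minimal sketch of the supervised-classification idea described above, and not the paper's actual pipeline, the snippet below trains a Random Forest on hypothetical per-point features of a heritage point cloud and reports the accuracy metrics typically compared in such studies. The file names, feature set and class encoding are assumptions for illustration.

```python
# Sketch: supervised classification of a labelled heritage point cloud with a
# Random Forest, followed by standard accuracy metrics. Per-point feature
# extraction is assumed to have been done beforehand.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

# Hypothetical inputs: per-point feature vectors and class labels
# (e.g. 0 = column, 1 = facade, 2 = window, ...)
features = np.load("heritage_features.npy")   # shape (n_points, n_features)
labels = np.load("heritage_labels.npy")       # shape (n_points,)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=42)

clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("overall accuracy:", accuracy_score(y_test, pred))
print("per-class F1:", f1_score(y_test, pred, average=None))
print("confusion matrix:\n", confusion_matrix(y_test, pred))
```

A deep learning counterpart (e.g. a point-wise network) would be evaluated with the same held-out split and metrics so that the approaches remain directly comparable.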
With recent advances in technology, 3D point clouds are increasingly requested and used, not only for visualization but also, for example, by public administrations for urban planning and management. 3D point clouds are also a frequent source for generating 3D city models, which have recently become available for many applications such as urban development plans, energy evaluation, navigation, visibility analysis and numerous other GIS studies. While the main data sources have remained the same (namely aerial photogrammetry and LiDAR), the way these city models are generated has been evolving towards automation with different approaches. As most of these approaches are based on point clouds with proper semantic classes, our aim is to classify aerial point clouds into meaningful semantic classes, e.g. ground level objects (GLO, including roads and pavements), vegetation, buildings' facades and buildings' roofs. In this study we tested and evaluated various learning algorithms for classification: three deep learning algorithms and one machine learning algorithm. In the experiments, several hand-crafted geometric features, chosen depending on the dataset, are used and, unconventionally, these geometric features are also fed to the deep learning models.
Figure 1. A LiDAR point cloud classified with our approach into 5 classes: buildings (red), powerline poles (orange), powerline cables (black), ground/soil (light green) and trees (dark green).
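The abstract above relies on hand-crafted geometric features computed per point. The sketch below illustrates one common family of such features (eigenvalue-based descriptors from a local k-neighbourhood); it is an assumption of what "geometric features" could look like here, not the authors' exact feature set, and the function name and parameters are invented for illustration.

```python
# Illustrative sketch: eigenvalue-based geometric features per point, computed
# from the covariance of a local k-neighbourhood. Such features can feed both
# classic classifiers and, as in the text, deep learning models.
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points, k=20):
    """points: (n, 3) array of XYZ coordinates; returns an (n, 4) feature array."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)            # indices of k nearest neighbours
    feats = np.zeros((len(points), 4))
    for i, neigh in enumerate(idx):
        nbr = points[neigh] - points[neigh].mean(axis=0)
        cov = nbr.T @ nbr / k
        w, v = np.linalg.eigh(cov)              # ascending eigenvalues: l3 <= l2 <= l1
        l3, l2, l1 = w / max(w.sum(), 1e-12)
        normal = v[:, 0]                        # eigenvector of the smallest eigenvalue
        feats[i] = [
            (l1 - l2) / max(l1, 1e-12),         # linearity
            (l2 - l3) / max(l1, 1e-12),         # planarity
            l3 / max(l1, 1e-12),                # sphericity
            1.0 - abs(normal[2]),               # verticality
        ]
    return feats
```

Planarity and verticality are intuitive discriminators for roofs versus facades, while linearity helps separate elongated structures such as powerline cables from vegetation.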
The increasing importance of three-dimensional (3D) city modelling is linked to the many applications and advantages of these data in different domains. The availability of images and Light Detection and Ranging (LiDAR) data is now an evident and unavoidable prerequisite, which is not always satisfied for past scenarios. Indeed, historical maps are often the only source of information when dealing with historical scenarios or multi-temporal (4D) digital representations. The paper presents a methodology to derive 4D building models at level of detail 1 (LoD1), inferring missing height information through machine learning techniques. The aim is to produce 4D LoD1 buildings for geospatial analyses, visualisation and urban studies, valorising historical data. Several machine learning regression techniques are analysed and employed to derive missing height data from digitised multi-temporal maps. The implemented method relies on geometric, neighbourhood, and categorical attributes for height prediction. The derived elevation data are then used for 4D building reconstruction, offering multi-temporal versions of the considered urban scenarios. Various evaluation metrics are also presented to tackle the common issue of missing ground-truth information in historical data.
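To make the height-regression step concrete, the following sketch combines numeric (geometric and neighbourhood) attributes with one-hot encoded categorical attributes in a single regression pipeline. The column names, file name and choice of regressor are assumptions for illustration, not the attributes or model actually used in the paper.

```python
# Sketch: predict missing building heights from footprint geometry,
# neighbourhood statistics and categorical attributes of digitised maps.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

buildings = pd.read_csv("digitised_map_footprints.csv")   # hypothetical table
numeric = ["footprint_area", "perimeter", "compactness", "neighbour_mean_height"]
categorical = ["building_use", "historical_period"]

model = Pipeline([
    ("prep", ColumnTransformer([
        ("num", "passthrough", numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("reg", RandomForestRegressor(n_estimators=300, random_state=0)),
])

# Cross-validated mean absolute error, a practical proxy when ground truth
# heights are scarce for historical epochs.
scores = -cross_val_score(model, buildings[numeric + categorical],
                          buildings["height_m"], cv=5,
                          scoring="neg_mean_absolute_error")
print("MAE per fold:", scores)
```

The predicted heights can then be used to extrude the digitised footprints into LoD1 block models for each historical epoch.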
Abstract. 3D city modeling has become important over the last decades, as these models are used in different studies including energy evaluation, visibility analysis, 3D cadastre, urban planning, change detection and disaster management. Segmentation and classification of photogrammetric or LiDAR data is important for 3D city models, as these are the main data sources, and both tasks are challenging due to the complexity of the data. This study presents research in progress, which focuses on the segmentation and classification of 3D point clouds and orthoimages to generate 3D urban models. The aim is to classify photogrammetry-based point clouds (> 30 pts/sqm) in combination with aerial RGB orthoimages (~ 10 cm resolution) in order to label buildings, ground level objects (GLOs), trees, grass areas, and other regions. While the classification of aerial orthoimages is expected to be a fast way to obtain classes and then transfer them from image to point cloud space, segmenting a point cloud is expected to be much more time consuming but to provide significant segments of the analyzed scene. For this reason, the proposed method combines segmentation methods on the two data sources in order to achieve better results.
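The image-to-point transfer step mentioned above can be sketched as a simple raster lookup: each point's XY coordinate is mapped to the pixel of a classified orthoimage and inherits its class id. This is only an assumed illustration of that step; the raster name, georeferencing and class encoding are hypothetical.

```python
# Sketch: transfer per-pixel classes of a classified orthoimage to a point
# cloud by sampling the raster at each point's XY position.
import numpy as np
import rasterio
from rasterio.transform import rowcol

def transfer_classes(points_xy, classified_ortho="ortho_classes.tif"):
    """points_xy: (n, 2) array of map coordinates; returns an (n,) array of class ids."""
    with rasterio.open(classified_ortho) as src:
        band = src.read(1)                              # per-pixel class labels
        rows, cols = rowcol(src.transform,
                            points_xy[:, 0], points_xy[:, 1])
    rows = np.clip(rows, 0, band.shape[0] - 1)
    cols = np.clip(cols, 0, band.shape[1] - 1)
    return band[rows, cols]
```

In the combined approach, these image-derived labels would then be reconciled with the geometric segments extracted directly from the point cloud.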