LiDAR point clouds are receiving growing interest in remote sensing, as they provide rich information that can be used on its own or together with optical data sources such as aerial imagery. However, their unstructured and sparse nature makes them difficult to handle, unlike raw imagery for which many efficient tools are available. To overcome this, standard approaches often rely on converting the point cloud into a digital elevation model (DEM), represented as a 2D raster. Such a raster can then be processed like an optical image, e.g. with 2D convolutional neural networks for semantic segmentation. In this letter, we show that LiDAR point clouds carry more information than the DEM alone, and that considering alternative rasterization strategies helps achieve better semantic segmentation results. We illustrate our findings on the IEEE DFC 2018 dataset.
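To make the rasterization step concrete, here is a minimal NumPy sketch (an illustration, not the letter's actual pipeline) that grids a point cloud into a per-cell maximum-elevation raster; the function name, cell size, and synthetic data are assumptions for demonstration.

```python
import numpy as np

def rasterize_dem(points, cell_size=1.0):
    """points: (N, 3) array of x, y, z; returns a 2D max-elevation raster."""
    xy_min = points[:, :2].min(axis=0)
    cols = ((points[:, 0] - xy_min[0]) // cell_size).astype(int)
    rows = ((points[:, 1] - xy_min[1]) // cell_size).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), -np.inf)
    # Per-cell maximum via an unbuffered scatter; empty cells stay -inf.
    np.maximum.at(grid, (rows, cols), points[:, 2])
    grid[np.isinf(grid)] = np.nan  # mark cells that received no returns
    return grid

pts = np.random.rand(10_000, 3) * [100.0, 100.0, 30.0]  # synthetic cloud
dem = rasterize_dem(pts, cell_size=1.0)
```

Taking the per-cell maximum yields a surface-like raster; swapping np.maximum.at for np.minimum.at gives a crude ground estimate, which is one way "alternative rasterization strategies" can be realized.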
This paper evaluates rasterization strategies and the benefit of hierarchical representations, in particular attribute profiles, for classifying urban scenes derived from multispectral LiDAR acquisitions. In recent years, rasterized LiDAR has been shown to provide a reliable source of information on its own or for fusion with multispectral/hyperspectral imagery. However, previous works using attribute profiles on LiDAR rely on elevation data only. Our approach combines several rasterized LiDAR features with a multilevel description to produce precise land cover maps over urban areas. Our experimental results, obtained with LiDAR data from the University of Houston, show good classification results for alternative rasters, and even better ones when multilevel image descriptions are used.
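As a hedged sketch of the multilevel description idea (not the paper's exact implementation), an attribute profile can be built by stacking area openings and closings of a raster at increasing area thresholds, here using scikit-image's max-tree-based filters; the thresholds are illustrative.

```python
import numpy as np
from skimage.morphology import area_opening, area_closing

def attribute_profile(raster, area_thresholds=(25, 100, 400, 1600)):
    """raster: 2D array (e.g., a rasterized LiDAR feature).
    Returns a (2*T + 1, H, W) stack: closings, original, openings."""
    closings = [area_closing(raster, area_threshold=t)
                for t in reversed(area_thresholds)]
    openings = [area_opening(raster, area_threshold=t)
                for t in area_thresholds]
    return np.stack(closings + [raster] + openings)
```

Each pixel's column through the stack then serves as a multilevel feature vector for a per-pixel classifier such as a random forest.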
Despite the popularity of deep neural networks in various domains, the extraction of digital terrain models (DTMs) from airborne laser scanning (ALS) point clouds remains challenging. This might be due to the lack of a dedicated large-scale annotated dataset and the data-structure discrepancy between point clouds and DTMs. To promote data-driven DTM extraction, this paper collects from open sources a large-scale dataset of ALS point clouds and corresponding DTMs covering various urban, forested, and mountainous scenes. A baseline method is proposed as a first attempt to train a Deep neural network to extract digital Terrain models directly from ALS point clouds via Rasterization techniques, coined DeepTerRa. Extensive studies with well-established methods are performed to benchmark the dataset and analyze the challenges in learning to extract DTMs from point clouds. The experimental results show the interest of this agnostic data-driven approach, with sub-metric error levels compared to methods specifically designed for DTM extraction. The data and source code are provided at https://lhoangan.github.io/deepterra/ for reproducibility and further research.
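For intuition only (this is not the DeepTerRa architecture), the rasterize-then-regress setup can be sketched as a small fully convolutional network that predicts a residual correction turning a rasterized elevation grid into a DTM; the network size, tensors, and loss below are placeholder assumptions.

```python
import torch
import torch.nn as nn

class TinyDTMNet(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, dsm):            # dsm: (B, 1, H, W) elevation raster
        return dsm + self.net(dsm)     # predict a residual ground correction

model = TinyDTMNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
dsm = torch.randn(2, 1, 64, 64)       # placeholder rasterized ALS input
dtm = torch.randn(2, 1, 64, 64)       # placeholder reference DTM
loss = nn.functional.l1_loss(model(dsm), dtm)
opt.zero_grad()
loss.backward()
opt.step()
```

The residual formulation reflects that a DTM is usually close to the input elevation raster except under vegetation and buildings, which is where the network must learn to correct.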
This paper deals with the morphological characterization of unstructured 3D point clouds derived from LiDAR data. The large majority of studies first rasterize 3D point clouds onto regular 2D grids and then use standard 2D image processing tools to characterize the data. In this paper, we propose instead to keep the 3D structure as long as possible in the processing chain. To this end, as raw LiDAR point clouds are unstructured, we first propose several voxelization strategies and then extract morphological features from the voxel data. The results obtained with attribute filtering show the ability of this process to extract useful information efficiently.
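A minimal sketch of this 3D route, under assumed parameters (not the paper's exact method): binary occupancy voxelization followed by a simple volume-based attribute filter that removes small 3D connected components.

```python
import numpy as np
from scipy import ndimage

def voxelize(points, voxel_size=0.5):
    """points: (N, 3); returns a boolean occupancy grid."""
    idx = ((points - points.min(axis=0)) // voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[tuple(idx.T)] = True
    return grid

def volume_opening(grid, min_voxels=50):
    """Keep only 26-connected components with at least min_voxels voxels."""
    labels, _ = ndimage.label(grid, structure=np.ones((3, 3, 3)))
    counts = np.bincount(labels.ravel())
    keep = counts >= min_voxels
    keep[0] = False  # drop the background label
    return keep[labels]
```

Filtering on component volume is the 3D analogue of an area opening on rasters, which is why operating on voxels preserves structures that a 2D projection would merge or lose.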