Tropical forest canopies are composed of tree crowns of multiple species that vary in shape and height, and ground inventories do not usually describe their structure reliably. Airborne laser scanning data can be used to characterize these individual crowns, but analytical tools developed for boreal or temperate forests may need to be adjusted before they can be applied to tropical environments. We therefore compared results from six different segmentation methods applied to six plots (39 ha) at a study site in French Guiana. We measured the overlap of automatically segmented crown projections with selected crowns manually delineated on high-resolution photographs. We also evaluated the goodness of fit after automatically matching segmented crowns with field inventory data, using a model linking tree diameter to crown width. The methods tested in this benchmark segmented very different numbers of crowns with very different characteristics. Segmentation methods based on the point cloud (AMS3D and Graph-Cut) generally outperformed methods based on the canopy height model, especially for small crowns; AMS3D outperformed the other methods in the overlap analysis, and AMS3D and Graph-Cut performed best in the automatic matching validation. Nevertheless, some canopy height model based methods performed better for very large emergent crowns. The dense foliage of tropical moist forests prevents point densities in the understory from being high enough to segment subcanopy trees accurately, regardless of the segmentation method.
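To make the overlap criterion concrete, the sketch below scores automatically segmented crown projections against manually delineated reference crowns using intersection-over-union. It is a generic illustration assuming crowns are available as 2D polygons; the function names and the 0.5 matching threshold are hypothetical and do not reproduce the scoring procedure of the study.

```python
# Minimal sketch of crown-overlap scoring, assuming each crown projection is a
# 2D polygon. Names and the matching threshold are illustrative placeholders.
from shapely.geometry import Polygon

def crown_iou(manual: Polygon, automatic: Polygon) -> float:
    """Intersection-over-union of a reference crown and a segmented crown."""
    if not manual.intersects(automatic):
        return 0.0
    return manual.intersection(automatic).area / manual.union(automatic).area

def best_match_scores(manual_crowns, automatic_crowns, min_iou=0.5):
    """For each reference crown, keep the best-overlapping segmented crown,
    counting overlaps below the threshold as misses."""
    scores = []
    for ref in manual_crowns:
        best = max((crown_iou(ref, seg) for seg in automatic_crowns), default=0.0)
        scores.append(best if best >= min_iou else 0.0)
    return scores

# Toy example: two 10 m square crowns offset by half a crown width.
a = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
b = Polygon([(5, 0), (15, 0), (15, 10), (5, 10)])
print(round(crown_iou(a, b), 3))  # 0.333
```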
An airborne LiDAR point cloud representing a forest contains 3D data from which the vertical stand structure, including under-story layers, can be derived. This paper presents a tree segmentation approach for multistory stands that stratifies the point cloud into canopy layers and segments individual tree crowns within each layer using a digital surface model based tree segmentation method. The novelty of the approach is the stratification procedure, which separates the point cloud into an over-story layer and multiple under-story tree canopy layers by analyzing the vertical distributions of LiDAR points within overlapping locales. Unlike previous work that stripped off rigid layers within a constrained area, the procedure stratifies the point cloud into flexible tree canopy layers over an unconstrained area with minimal over- and under-segmentation of tree crowns across the layers. The procedure makes no a priori assumptions about the shape and size of the tree crowns and can, independent of the tree segmentation method, be used to vertically stratify tree crowns of forest canopies with a variety of stand structures. We applied the proposed approach to the University of Kentucky Robinson Forest, a natural deciduous forest with complex terrain and vegetation structure. The segmentation results showed that the stratification procedure strongly improved the detection of under-story trees (from 46% to 68%) at the cost of introducing a fair number of over-segmented under-story trees (from 1% to 16%), while barely affecting the segmentation quality of over-story trees. The vertical stratification results showed that the point density of the under-story canopy layers was suboptimal for reasonable tree segmentation, suggesting that acquiring denser LiDAR point clouds (becoming affordable with advances in sensor technology and platforms) would allow further improvements in segmenting under-story trees.
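As a rough illustration of the stratification idea, the sketch below splits normalized return heights within a single locale into layers at empty runs of a vertical height histogram. The bin width, gap length, and single-locale scope are simplifying assumptions; the authors' procedure analyzes overlapping locales and is not reproduced here.

```python
# Simplified vertical stratification of one locale: split return heights at
# sufficiently long empty runs of the height histogram. Parameters are assumed.
import numpy as np

def split_heights(heights, bin_width=1.0, min_gap_bins=2):
    """Assign each normalized return height to a canopy layer (0 = lowest)."""
    heights = np.asarray(heights, dtype=float)
    edges = np.arange(0.0, heights.max() + bin_width, bin_width)
    counts, edges = np.histogram(heights, bins=edges)

    # Treat long runs of empty bins as gaps between canopy layers.
    cut_heights, run_start = [], None
    for i, c in enumerate(counts):
        if c == 0 and run_start is None:
            run_start = i
        elif c > 0 and run_start is not None:
            if i - run_start >= min_gap_bins:
                cut_heights.append(edges[(run_start + i) // 2])
            run_start = None

    layer = np.zeros(len(heights), dtype=int)
    for cut in sorted(cut_heights):
        layer += heights > cut
    return layer

# Toy locale: an understory cluster around 4 m and an overstory around 22 m.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(4, 1, 200), rng.normal(22, 2, 800)])
print(np.bincount(split_heights(z)))  # expected: two layers, ~200 and ~800 returns
```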
This paper presents a non-parametric approach for segmenting trees from airborne LiDAR data in deciduous forests. Based on the LiDAR point cloud, the approach collects crown information such as steepness and height on the fly to delineate crown boundaries and, most importantly, does not require a priori assumptions about crown shape and size. The approach segments trees iteratively, starting from the tallest tree within a given area and proceeding to the smallest until all trees have been segmented. To evaluate its performance, the approach was applied to the University of Kentucky Robinson Forest, a deciduous closed-canopy forest with complex terrain and vegetation conditions. The approach identified 94% of dominant and co-dominant trees with a false detection rate of 13%. About 62% of intermediate, overtopped, and dead trees were also detected, with a false detection rate of 15%. The overall segmentation accuracy was 77%. Correlations of the segmentation scores with local terrain and stand metrics were not significant, which likely indicates the robustness of the approach, as results are not sensitive to differences in terrain and stand structure.
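The sketch below illustrates only the tallest-first iteration, and only on a rasterized canopy height model: seed at the highest unlabeled cell, grow the crown while the surface descends, and repeat until no cell above a minimum height remains. The cited approach works directly on the point cloud and adapts to local crown steepness, so this grid-based version with a fixed search radius is a simplified stand-in rather than the authors' method.

```python
# Simplified "tallest-first" crown segmentation on a canopy height model (CHM).
# The minimum height and maximum crown radius are assumed parameters.
from collections import deque
import numpy as np

def tallest_first_segmentation(chm, min_height=2.0, max_radius=10):
    labels = np.zeros(chm.shape, dtype=int)
    label = 0
    while True:
        # Seed at the highest cell not yet assigned to a crown.
        masked = np.where(labels == 0, chm, -np.inf)
        seed = np.unravel_index(np.argmax(masked), chm.shape)
        if masked[seed] < min_height:
            break
        label += 1
        labels[seed] = label
        queue = deque([seed])
        while queue:  # grow the crown while the surface keeps descending
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < chm.shape[0] and 0 <= nc < chm.shape[1]):
                    continue
                too_far = (abs(nr - seed[0]) > max_radius
                           or abs(nc - seed[1]) > max_radius)
                if (labels[nr, nc] == 0 and not too_far
                        and min_height <= chm[nr, nc] <= chm[r, c]):
                    labels[nr, nc] = label
                    queue.append((nr, nc))
    return labels

# Toy 1 m CHM with two small crowns; each receives its own label.
chm = np.array([[0, 0, 0, 0, 0, 0],
                [0, 8, 9, 0, 5, 4],
                [0, 7, 8, 0, 6, 5],
                [0, 0, 0, 0, 0, 0]], dtype=float)
print(tallest_first_segmentation(chm))
```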
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning, offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the density required for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m², understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can be processed efficiently to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications, such as geological subsurface modelling or biomedical tissue analysis.
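A back-of-the-envelope sketch of the density argument: if occlusion leaves a canopy layer with only a fraction of all returns, the acquisition density needed for that layer to reach a target per-layer density scales with the inverse of that fraction. The fraction and target density used below are placeholders chosen for illustration, not estimates from the paper.

```python
# Occlusion-driven density scaling; the numbers in the example are hypothetical.
def required_acquisition_density(target_layer_density: float,
                                 layer_point_fraction: float) -> float:
    """Acquisition density (pt/m^2) needed so that a layer receiving
    `layer_point_fraction` of all returns reaches `target_layer_density`."""
    if not 0.0 < layer_point_fraction <= 1.0:
        raise ValueError("layer_point_fraction must be in (0, 1]")
    return target_layer_density / layer_point_fraction

# Hypothetical example: a deep understory layer receiving 5% of all returns and
# a per-layer density of 8 pt/m^2 at which segmentation accuracy plateaus.
print(required_acquisition_density(8.0, 0.05))  # 160.0 pt/m^2
```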
The purpose of this study was to investigate the use of deep learning for coniferous/deciduous classification of individual trees from airborne LiDAR data. To enable efficient processing by a deep convolutional neural network (CNN), we designed two discrete representations using leaf-off and leaf-on LiDAR data: a digital surface model with four channels (DSM×4) and a set of four 2D views (4×2D). A training dataset of labeled tree crowns was generated by segmenting tree crowns and coregistering them with field data. Potential mislabeling caused by GPS error or tree lean was corrected using a statistical ensemble filtering procedure. Because the training data were heavily unbalanced (~8% conifers), we trained an ensemble of CNNs on random balanced sub-samples of augmented data (180 rotational variations per instance). The 4×2D representation yielded classification accuracies similar to those of the DSM×4 representation (~82% coniferous and ~90% deciduous) while converging faster. Data augmentation improved the classification accuracies, but more real training instances (especially coniferous) would likely yield much stronger improvements. Leaf-off LiDAR data were the primary source of useful information, likely because of the perennial nature of coniferous foliage. LiDAR intensity values also proved useful, but normalization yielded no significant improvements. We also observed that a large training dataset may compensate for the lack of a subset of important domain data. Lastly, the classification accuracies of overstory trees (~90%) were more balanced than those of understory trees (~90% deciduous and ~65% coniferous), which is likely due to the incomplete capture of understory tree crowns by airborne LiDAR. Automatic derivation of optimal features via deep learning provides an opportunity for remarkable improvements in prediction tasks where the captured data are not friendly to the human visual system and likely yield sub-optimal human-designed features.
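The helpers below sketch the class-balancing and rotation-augmentation steps described above, applied to per-tree raster patches such as multi-channel DSM crops. The patch format, the 2-degree rotation step, and the helper names are assumptions for illustration; the CNN architecture and ensemble training loop are omitted.

```python
# Balanced sub-sampling and rotational augmentation for per-tree raster patches
# (assumed shape: (n_trees, height, width, channels)). Illustrative only.
import numpy as np
from scipy.ndimage import rotate

def balanced_subsample(patches, labels, rng):
    """Random subsample with equal numbers of deciduous (0) and coniferous (1) trees."""
    labels = np.asarray(labels)
    minority = min(np.sum(labels == 0), np.sum(labels == 1))
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=minority, replace=False)
        for c in (0, 1)
    ])
    rng.shuffle(idx)
    return patches[idx], labels[idx]

def rotational_augment(patch, n_rotations=180):
    """Generate rotated copies of one patch (e.g., 180 variations at 2-degree steps)."""
    step = 360.0 / n_rotations
    return np.stack([
        rotate(patch, angle=k * step, axes=(0, 1), reshape=False,
               order=1, mode="nearest")
        for k in range(n_rotations)
    ])

# Usage (with hypothetical arrays `patches` and `labels`):
# rng = np.random.default_rng(0)
# X_bal, y_bal = balanced_subsample(patches, labels, rng)
# augmented = rotational_augment(X_bal[0])  # shape: (180, height, width, channels)
```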