Accurate measurement of forest structural parameters is essential for forest inventory and growth models, wildfire management, and carbon cycle modeling. Terrestrial laser scanning (TLS) fills the gap between tree-scale manual measurements and large-scale airborne LiDAR measurements by providing accurate below-crown information through non-destructive methods. This study developed innovative methods to extract individual tree height, diameter at breast height (DBH), and crown width of trees in East Texas. The influence of scan settings, such as leaf-on/leaf-off seasons, tree distance from the scanner, and processing choices, on the accuracy of the derived tree measurements was also investigated. DBH was retrieved by cylinder fitting at different height bins. Individual trees were extracted from the TLS point cloud to determine tree heights and crown widths. The R-squared value ranged from 0.91 to 0.97 when field-measured DBH was validated against TLS-derived DBH using different methods. An accuracy of 92% (RMSE = 1.51 m) was obtained for predicting tree heights. The R-squared value was 0.84 and the RMSE was 1.08 m when TLS-derived crown widths were validated against field-measured crown widths. Examples of underestimation of field-measured forest structural parameters due to tree shadowing are also discussed. The results of this study will benefit foresters and remote sensing studies from airborne and spaceborne platforms for map upscaling or calibration, aboveground biomass estimation, and prudent forest management decision making.
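For readers unfamiliar with the DBH retrieval step, the sketch below illustrates the general idea of fitting a circle (a cylinder cross-section) to TLS returns in a thin height bin around breast height. It is not the study's implementation; the 1.37 m breast height, the 10 cm bin, the crude ground reference, and the function names are assumptions made for illustration.

```python
# Minimal sketch of DBH estimation from a TLS point cloud by circle fitting
# in a height bin around breast height (1.37 m). Illustrative only; bin width,
# ground reference, and helper names are assumptions, not the study's method.
import numpy as np
from scipy.optimize import least_squares

def fit_circle_xy(points_xy):
    """Least-squares circle fit to the XY coordinates of a stem slice."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    x0, y0 = x.mean(), y.mean()
    r0 = np.sqrt((x - x0) ** 2 + (y - y0) ** 2).mean()

    def residuals(p):
        cx, cy, r = p
        return np.sqrt((x - cx) ** 2 + (y - cy) ** 2) - r

    cx, cy, r = least_squares(residuals, x0=[x0, y0, r0]).x
    return cx, cy, r

def dbh_from_tls(points_xyz, breast_height=1.37, bin_half_width=0.05):
    """Estimate DBH (m) from points in a thin bin centred on breast height."""
    z_rel = points_xyz[:, 2] - points_xyz[:, 2].min()   # crude ground reference
    mask = np.abs(z_rel - breast_height) <= bin_half_width
    slice_xy = points_xyz[mask, :2]
    if len(slice_xy) < 10:
        raise ValueError("Too few stem returns in the breast-height bin")
    _, _, radius = fit_circle_xy(slice_xy)
    return 2.0 * radius                                  # diameter = 2 * radius
```

In practice the same fit would be repeated in several height bins along the stem, with outlier filtering, to obtain a robust diameter estimate.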
The United States Forest Service Forest Inventory and Analysis (FIA) Program provides a diverse selection of data used to assess the status of the nation's forests using sample locations dispersed throughout the country. Airborne laser scanning (ALS) systems are capable of producing accurate measurements of individual tree dimensions and can also characterize forest structure in three dimensions. This study investigates the potential of discrete-return ALS data for modeling forest aboveground biomass (AGBM) and gross volume (gV) at FIA plot locations in the Malheur National Forest, eastern Oregon, at three analysis levels: (1) individual subplot (r = 7.32 m); (2) plot, comprising four clustered subplots; and (3) hectare plot (r = 56.42 m). A methodology for the creation of three point cloud-based airborne LiDAR metric sets is presented. Models for estimating AGBM and gV based on LiDAR-derived height metrics were built and validated against FIA estimates of AGBM and gV derived using regional allometric equations. Simple linear regression models based on the plot-level analysis performed best. Results suggest that the current FIA plot design can be used with dense airborne LiDAR data to produce area-based estimates of AGBM and gV, and that the increased spatial scale of hectare plots may be inappropriate for modeling AGBM or gV unless exhaustive tree tallies are available. Overall, this study demonstrates that ALS data can be used to create models that describe the AGBM and gV of Pacific Northwest FIA plots and highlights the potential of estimates derived from ALS data to augment current FIA data collection procedures by providing a temporary, intermediate estimate of AGBM and gV for plots with outdated field measurements.
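As a rough illustration of the area-based workflow described above, the sketch below computes a few canopy-height metrics from height-normalised ALS returns falling inside a subplot footprint (r = 7.32 m) and fits a simple linear regression of AGBM on one metric. The specific metrics, the single-predictor model form, and the function names are assumptions for illustration, not the paper's metric sets or fitted models.

```python
# Minimal sketch of area-based AGBM modeling: height metrics per subplot,
# then ordinary least squares AGBM = b0 + b1 * metric. Metric choices and
# model form are illustrative assumptions.
import numpy as np

def plot_height_metrics(points_xyz, center_xy, radius=7.32, height_cutoff=2.0):
    """Basic canopy-height metrics for height-normalised returns in one subplot."""
    d = np.hypot(points_xyz[:, 0] - center_xy[0], points_xyz[:, 1] - center_xy[1])
    z = points_xyz[d <= radius, 2]        # heights above ground
    z = z[z >= height_cutoff]             # drop ground/understory returns
    if z.size == 0:
        return {"h_mean": 0.0, "h_p95": 0.0, "h_cv": 0.0}
    return {
        "h_mean": float(z.mean()),
        "h_p95": float(np.percentile(z, 95)),
        "h_cv": float(z.std() / z.mean()),
    }

def fit_simple_linear_model(metric_values, agbm_values):
    """OLS fit of AGBM = b0 + b1 * metric; returns (b0, b1, R^2)."""
    x, y = np.asarray(metric_values), np.asarray(agbm_values)
    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x
    r2 = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return b0, b1, r2
```

The same metric computation would be run at the plot and hectare levels by changing the radius and pooling returns, which is essentially how the three analysis levels differ.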
Small unmanned aerial systems (UAS) have emerged as high-throughput platforms for collecting high-resolution image data over large crop fields to support precision agriculture and plant breeding research. At the same time, the improved efficiency of image capture is producing massive datasets, which pose analysis challenges for delivering the needed phenotypic data. To complement these high-throughput platforms, there is an increasing need in crop improvement for robust image analysis methods that can handle large amounts of image data. Approaches based on deep learning models are currently the most promising and show unparalleled performance in analyzing large image datasets. This study developed and applied an image analysis approach based on a SegNet deep learning semantic segmentation model to estimate sorghum panicle counts, a critical phenotypic variable in sorghum crop improvement, from UAS images over selected sorghum experimental plots. The SegNet model was trained to semantically segment UAS images into sorghum panicles, foliage, and exposed ground using 462 labeled images of 250 × 250 pixels; the trained model was then applied to the field orthomosaic to generate a field-level semantic segmentation. Individual panicle locations were obtained by post-processing the segmentation output to remove small objects and split merged panicles. A comparison between model panicle count estimates and manually digitized panicle locations in 60 randomly selected plots showed an overall detection accuracy of 94%. A per-plot panicle count comparison also showed high agreement between estimated and reference counts (Spearman correlation ρ = 0.88, mean bias = 0.65). Panicle detection errors stemmed mainly from misclassifications during the semantic segmentation step and from mosaicking errors in the field orthomosaic. Overall, the deep learning semantic segmentation approach showed good promise; with a larger labeled dataset and extensive hyper-parameter tuning, it should provide an even more robust and effective characterization of sorghum panicle counts.
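The post-processing step mentioned above (removing small objects and splitting merged panicles) can be sketched with a standard distance-transform watershed, as shown below. This is a generic recipe rather than the study's tuned pipeline; the size and peak-distance thresholds are illustrative assumptions.

```python
# Minimal sketch of panicle-count post-processing: clean the binary "panicle"
# class, split touching panicles with a distance-transform watershed, count
# the resulting labels. Thresholds are illustrative assumptions.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.morphology import remove_small_objects
from skimage.segmentation import watershed

def count_panicles(panicle_mask, min_size=50, min_peak_distance=10):
    """Return (panicle_count, labelled_image) from a boolean panicle mask."""
    mask = remove_small_objects(panicle_mask.astype(bool), min_size=min_size)

    # Peaks of the distance transform act as markers, one per presumed panicle centre.
    distance = ndi.distance_transform_edt(mask)
    peak_coords = peak_local_max(distance, min_distance=min_peak_distance, labels=mask)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)

    # Watershed on the inverted distance splits merged panicles along their necks.
    labels = watershed(-distance, markers, mask=mask)
    return int(labels.max()), labels
```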