Green vegetation (GV) and non-photosynthetic vegetation (NPV) cover are both important biophysical parameters for grassland research. Current cover-estimation methods, including subjective visual estimation and digital image analysis, require human intervention and lack automation, batch-processing capability, and extraction accuracy. This study therefore developed a method to quantify both GV and standing dead matter (SDM) fractional cover from field-taken digital RGB images with semi-automated batch processing (implemented as a Python script) for mixed grasslands with complex background components, including litter, moss, lichen, rocks, and soil. The results show that GV cover extracted by the developed method is superior to subjective visual estimation, based on its linear relation with the normalized difference vegetation index (NDVI) calculated from field-measured hyperspectra (R2 = 0.846, p < 0.001 for GV cover estimated from RGB images; R2 = 0.711, p < 0.001 for visually estimated GV cover). The results also show that the developed method has great potential to estimate SDM cover with limited effects from light-colored understory components, including litter, soil crust, and bare soil. In addition, subjective visual estimation tends to yield higher cover for both GV and SDM than estimates derived from RGB images.
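The abstract does not specify the pixel-classification rule used to separate green vegetation from the background. A common approach for RGB imagery is to threshold the excess green index (ExG = 2g − r − b on chromatic coordinates); the following is a minimal sketch of that idea, with an illustrative threshold that is not taken from the paper:

```python
import numpy as np

def gv_cover_fraction(rgb, exg_threshold=0.05):
    """Estimate green vegetation (GV) fractional cover from an RGB image.

    A pixel counts as green vegetation when its excess green index
    ExG = 2g - r - b (on chromatic coordinates) exceeds a threshold.
    The threshold value is illustrative, not the paper's calibration.
    """
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0            # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b
    return float((exg > exg_threshold).mean())

# Synthetic 2x2 image: two green pixels, one brown, one gray
img = np.array([[[30, 200, 30], [40, 180, 50]],
                [[120, 90, 60], [128, 128, 128]]], dtype=np.uint8)
print(gv_cover_fraction(img))  # → 0.5
```

Using chromatic coordinates (each channel divided by the pixel sum) makes the index less sensitive to overall brightness, which matters for field photos taken under varying illumination.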
Algal blooms are a serious global issue for inland waters, posing a serious threat to aquatic ecosystems. Timely and accurate detection of algal blooms is critical for their control, management, and forecasting. Optical satellite imagery with short revisit times has been widely used to monitor algal blooms in marine and large inland waters, but such images are typically of coarse spatial resolution, limiting their utility for mapping algal blooms in small inland waters. We developed a new method to map the spatial extent of algal blooms using Sentinel-2 MSI and Landsat OLI images, which offer higher spatial resolution but lower temporal resolution, based on the concept of the Local Indicator of Spatial Association (LISA). The mapping results were applied to measure the duration and frequency of algal blooms in Lake Taihu from 2017 to 2020. Our results show that the developed method can extract the spatial distribution of moderate algal blooms using near-infrared and red-edge bands (bands 6, 7, 8, and 8a of Sentinel-2 MSI or band 5 of Landsat OLI), as validated against MODIS FAI data (R2 = 0.888 for Sentinel-2 MSI and R2 = 0.85 for Landsat OLI, p < 0.05). However, the temporal resolution of the combined Landsat OLI and Sentinel-2 MSI images (i.e., up to 2-3 days) is insufficient to monitor algal blooms during the summer in Lake Taihu because of cloud cover and rapid algal change. Our research benefits the management of small inland waters with complex water conditions.
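The most widely used Local Indicator of Spatial Association is local Moran's I, which flags pixels whose values cluster with similar neighbours. The abstract does not give the paper's exact weights or bands, so the sketch below is a generic local Moran's I on a 2D raster with rook (4-neighbour) contiguity and equal weights, applied to a synthetic bright "bloom" patch:

```python
import numpy as np

def local_morans_i(raster):
    """Local Moran's I (a Local Indicator of Spatial Association) for a
    2D raster, using rook (4-neighbour) contiguity with equal weights.

    High positive values mark pixels similar to their neighbours
    (e.g. high-NIR bloom pixels surrounded by bloom). This is a
    generic LISA sketch, not the paper's exact formulation.
    """
    x = raster.astype(float)
    z = (x - x.mean()) / x.std()
    # spatial lag: sum of the four rook neighbours, zero-padded at the edges
    padded = np.pad(z, 1, mode="constant")
    lag = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:])
    return z * lag

# A bright 2x2 "bloom" patch in a dark background
scene = np.zeros((6, 6))
scene[2:4, 2:4] = 1.0
lisa = local_morans_i(scene)
print(lisa[2, 2] > 0)  # bloom pixel clusters with bloom neighbours → True
```

Positive LISA values inside the patch and negative values along its edge are what let a thresholded LISA map delineate the bloom's spatial extent.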
This article proposes using the relative distances between adjacent envelope peaks detected in stereo audio as fingerprints for copy identification. Matching is performed with the rough longest common subsequence (RLCS) algorithm. Experimental results show that the proposed approach achieves better identification accuracy than an MPEG-7-based scheme for distorted and noisy audio. Compared with other schemes, it uses fewer bits with comparable performance. The proposed fingerprints can also be used in conjunction with the MPEG-7-based scheme to reduce the computational burden.
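The abstract does not define the envelope or the peak-picking rule, so the following is a minimal sketch of the fingerprinting idea only, assuming a per-frame maximum-amplitude envelope and simple local-maximum peaks; the frame size and peak rule are illustrative, and the RLCS matching stage is not shown:

```python
import numpy as np

def envelope_peak_fingerprint(signal, frame=256):
    """Fingerprint a mono signal as the relative distances between
    adjacent peaks of its amplitude envelope.

    The envelope is the per-frame maximum of |signal|; a peak is a
    frame larger than both neighbours. Ratios of adjacent peak gaps
    (rather than absolute gaps) make the fingerprint robust to
    uniform time scaling.
    """
    n = len(signal) // frame
    env = np.abs(signal[:n * frame]).reshape(n, frame).max(axis=1)
    peaks = [i for i in range(1, n - 1)
             if env[i] > env[i - 1] and env[i] > env[i + 1]]
    gaps = np.diff(peaks)
    if len(gaps) < 2:
        return np.array([])
    return gaps[1:] / gaps[:-1]        # relative distances between adjacent peaks

# Synthetic signal with envelope peaks at frames 1, 4 and 8
fp = envelope_peak_fingerprint(np.repeat([0., 1, 0, 0, 1, 0, 0, 0, 1, 0], 256))
print(fp)  # gaps 3 and 4 between peaks → one ratio, 4/3
```

Two such fingerprint sequences would then be compared with an approximate-matching algorithm such as RLCS, which tolerates insertions, deletions, and small numeric deviations caused by distortion.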
Canopy closure (CC), a useful biophysical parameter of forest structure, is an important indicator of forest resources and biodiversity. Light Detection and Ranging (LiDAR) data have been widely studied in forest ecosystems to obtain the three-dimensional (3D) structure of forests. The components of Unmanned Aerial Vehicle LiDAR (UAV-LiDAR) are similar to those of airborne LiDAR, but the higher pulse density reveals more detailed vertical structures. Hemispherical photography (HP) has proven effective for estimating CC but remains time-consuming and impractical over large forests. We therefore used UAV-LiDAR data with a canopy-height-model-based (CHM-based) method and a synthetic-hemispherical-photography-based (SHP-based) method to extract CC from a pure poplar plantation. The performance of the CC extraction methods, assessed from an angular viewpoint, was validated against HP results. The CHM-based method achieved high accuracy within a 45° zenith-angle range using a 0.5 m pixel size and a larger radius (i.e., k = 2; R2 = 0.751, RMSE = 0.053), but its accuracy declined rapidly at zenith angles of 60° and 75° (R2 = 0.707, 0.490; RMSE = 0.053, 0.066). The CHM-based method also underestimated CC for leaf-off deciduous trees with low CC. The SHP-based method likewise achieved high accuracy within a 45° zenith-angle range, and its accuracy remained stable across the three zenith-angle ranges (R2 = 0.688, 0.674, 0.601 and RMSE = 0.059, 0.056, 0.058 for the 45°, 60°, and 75° ranges, respectively). CC from HP and SHP showed a similar trend of change as the zenith-angle range increased, whereas the CHM-based method showed no significant change, indicating that it is insensitive to changes in angular CC compared to the SHP-based method.
However, the accuracy of both methods differed among plantation ages, with a slight underestimate for 8-year-old plantations and an overestimate for 17- and 20-year-old plantations. Our research provides a reference for CC estimation from a point-based angular viewpoint and for monitoring the understory light conditions of plantations.
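A CHM-based angular CC estimate can be sketched as the fraction of canopy pixels inside a circular window whose radius grows with the zenith-angle range. The abstract does not give the paper's exact geometry or thresholds, so every parameter below (sensor height, 2 m canopy threshold, the radius rule) is an illustrative assumption:

```python
import numpy as np

def chm_canopy_closure(chm, center, zenith_deg, pixel_size=0.5,
                       sensor_height=1.5, canopy_threshold=2.0):
    """Canopy closure from a canopy height model (CHM) around a point.

    Pixels with CHM above `canopy_threshold` (m) count as canopy. The
    evaluated radius widens with the zenith-angle range as
    r = (mean canopy height - sensor height) * tan(zenith).
    All parameter values are illustrative assumptions, not the
    paper's calibrated settings.
    """
    canopy = chm > canopy_threshold
    mean_h = chm[canopy].mean() if canopy.any() else canopy_threshold
    radius = max((mean_h - sensor_height) * np.tan(np.radians(zenith_deg)),
                 pixel_size)
    rows, cols = np.indices(chm.shape)
    dist = np.hypot(rows - center[0], cols - center[1]) * pixel_size
    window = dist <= radius
    return float(canopy[window].mean())

# 21x21 CHM (0.5 m pixels) with a 5x5 block of 10 m canopy at the centre
chm = np.zeros((21, 21))
chm[8:13, 8:13] = 10.0
print(chm_canopy_closure(chm, (10, 10), zenith_deg=5))   # narrow cone, all canopy → 1.0
```

Widening the zenith-angle range pulls in more ground pixels around the crown, which is why angular CC from a fixed CHM tends to fall as the range grows.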
Accurate and efficient estimation of forest volume or biomass is critical for carbon-cycle studies, forest management, and the timber industry. Individual tree detection and segmentation (ITDS) is the first and key step toward accurate extraction of detailed forest structure parameters from LiDAR (light detection and ranging) data. However, ITDS remains challenging with UAV-LiDAR (LiDAR from unmanned aerial vehicles) in broadleaved forests because of their irregular, overlapping canopies. We developed an efficient and accurate ITDS framework for broadleaved forests based on UAV-LiDAR point clouds. It involves individual tree detection (ITD) from point clouds acquired during the leaf-off season, initial individual tree segmentation (ITS) based on the seed points from ITD, and improvement of the initial ITS through a refining process. The results indicate that this proposed strategy efficiently provides accurate ITDS results. We show the following: (1) point-cloud-based ITD methods, especially Mean Shift, perform better for seed-point selection than CHM-based (canopy height model) ITD methods on leaf-off point clouds; (2) seed points significantly improved the accuracy and efficiency of the ITS algorithms; (3) the refining process, using DBSCAN (density-based spatial clustering of applications with noise) and a kNN (k-nearest neighbor) classifier, significantly reduced edge errors in the ITS results. Our study developed a novel ITDS strategy for UAV-LiDAR point clouds that performs well in dense deciduous broadleaved forests, and the proposed framework could be applied to single-phase point clouds instead of multi-temporal LiDAR data in the future, provided the point clouds contain detailed tree-trunk points.
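The kNN part of the refining step can be illustrated with a small sketch: ambiguous edge points are reassigned to trees by majority vote among their k nearest already-segmented points. This covers only the kNN vote; the DBSCAN stage of the abstract's pipeline is not shown, and the toy crowns below are synthetic:

```python
import numpy as np

def knn_refine(labeled_pts, labels, edge_pts, k=5):
    """Reassign ambiguous edge points to trees by a k-nearest-neighbour
    majority vote against already-segmented points (a sketch of the
    kNN refinement idea, not the paper's full DBSCAN + kNN pipeline).
    """
    out = np.empty(len(edge_pts), dtype=labels.dtype)
    for i, p in enumerate(edge_pts):
        d = np.linalg.norm(labeled_pts - p, axis=1)   # 3D distances to labeled points
        nearest = labels[np.argsort(d)[:k]]           # labels of the k closest
        vals, counts = np.unique(nearest, return_counts=True)
        out[i] = vals[np.argmax(counts)]              # majority vote
    return out

# Two toy "tree crowns" in 3D and an edge point near crown 1
rng = np.random.default_rng(0)
crown0 = rng.normal([0, 0, 10], 0.5, (50, 3))
crown1 = rng.normal([6, 0, 10], 0.5, (50, 3))
pts = np.vstack([crown0, crown1])
lab = np.array([0] * 50 + [1] * 50)
print(knn_refine(pts, lab, np.array([[5.5, 0.2, 10.0]])))  # → [1]
```

In practice the vote would run only on points near segment boundaries, which is where crown overlap makes the initial ITS labels unreliable.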