The extraction of information about individual trees is essential to fruit growing and orchard management. Data acquired from spectral sensors mounted on unmanned aerial vehicles (UAVs) have very high spatial and temporal resolution. However, an efficient and reliable method for extracting information about individual trees with irregular crown shapes against a complicated background is lacking. In this study, we developed and tested a UAV-imagery-based approach for extracting individual-tree information in an orchard with a complicated background, containing apple trees (Plot 1) and pear trees (Plot 2). The workflow involves the construction of a digital orthophoto map (DOM), digital surface models (DSMs), and digital terrain models (DTMs) using the Structure from Motion (SfM) and Multi-View Stereo (MVS) approaches, as well as the calculation of the Excess Green minus Excess Red Index (ExGR) and the selection of various thresholds. Furthermore, a local-maxima filter method and marker-controlled watershed segmentation were used for the detection and delineation, respectively, of individual trees. The accuracy of the proposed method was evaluated by comparing its results with manual estimates of the numbers of trees and the areas and diameters of tree-crowns, all obtained from the DOM. The results of the proposed method are in good agreement with these manual estimates: the F-scores for the estimated numbers of individual trees were 99.0% and 99.3% in Plot 1 and Plot 2, respectively, while the Producer’s Accuracy (PA) and User’s Accuracy (UA) for the delineation of individual tree-crowns were above 95% for both plots.
For the area of individual tree-crowns, root-mean-square error (RMSE) values of 0.72 m² and 0.48 m² were obtained for Plot 1 and Plot 2, respectively, while for the diameter of individual tree-crowns, RMSE values of 0.39 m and 0.26 m were obtained for Plot 1 (339 trees correctly identified) and Plot 2 (203 trees correctly identified), respectively. Both the areas and diameters of individual tree-crowns were overestimated to varying degrees.
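The ExGR vegetation-masking step in the workflow above can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes the standard definitions ExG = 2g − r − b and ExR = 1.4r − g on chromatically normalized channels, with a zero threshold to separate vegetation from background.

```python
import numpy as np

def exgr_mask(rgb, threshold=0.0):
    """Compute the Excess Green minus Excess Red index (ExGR) and
    return a boolean vegetation mask (ExGR > threshold).

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    """
    # Normalize channels so r + g + b = 1 at each pixel.
    total = rgb.sum(axis=2, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, 2, 0)

    exg = 2.0 * g - r - b  # Excess Green
    exr = 1.4 * r - g      # Excess Red
    exgr = exg - exr       # ExGR

    return exgr > threshold

# Usage: a 1x2 image with one green and one red pixel.
img = np.array([[[0.1, 0.8, 0.1], [0.8, 0.1, 0.1]]])
print(exgr_mask(img))  # [[ True False]]
```

In a full pipeline of the kind the abstract describes, this mask would then feed tree-top detection (a local-maxima filter on the canopy height model) and marker-controlled watershed segmentation, e.g. via `skimage.feature.peak_local_max` and `skimage.segmentation.watershed`.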
The classification of tree species through remote sensing data is of great significance to monitoring forest disturbances, biodiversity assessment, and carbon estimation. The dense time series and wide swath of Sentinel-2 data provide an opportunity to map tree species accurately and in a timely manner over large areas. Many current studies have applied machine learning (ML) algorithms combined with Sentinel-2 images to classify tree species, but it is still unclear which algorithm is more effective for the automated extraction of tree species. In this study, five machine learning algorithms were compared to identify the composition of tree species from multi-temporal Sentinel-2 images in the JianShe forest farm, Northeast China. Three major types of deep neural networks (Conv1D, AlexNet, and LSTM) were tested to classify Sentinel-2 time series; these represent three disparate but effective strategies for processing sequential data. The other two models are the Support Vector Machine (SVM) and Random Forest (RF), which are widely adopted and perform well in various remote sensing applications. The results show that the overall accuracy of the neural network models was better than that of the SVM and RF. The Conv1D model had the highest classification accuracy (84.19%), followed by the LSTM model (81.52%) and the AlexNet model (76.02%). For the non-neural-network models, the classification accuracy of RF (79.04%) was higher than that of SVM (72.79%) but lower than that of Conv1D and LSTM. Therefore, deep neural networks combined with multi-temporal Sentinel-2 images can effectively improve the accuracy of tree species classification.
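The core operation that distinguishes the Conv1D strategy above is a convolution applied along the acquisition dates of the image time series. The sketch below illustrates that operation in plain numpy on a hypothetical single-band (NDVI-like) series with a hand-picked difference kernel; an actual Conv1D classifier learns many such kernels over all Sentinel-2 bands.

```python
import numpy as np

def conv1d(series, kernel):
    """Valid-mode 1D cross-correlation of a band's time series with a
    temporal kernel -- the core operation a Conv1D layer applies along
    the sequence of acquisition dates."""
    n, k = len(series), len(kernel)
    return np.array([np.dot(series[i:i + k], kernel) for i in range(n - k + 1)])

# Hypothetical vegetation-index time series over one growing season (8 dates).
ndvi = np.array([0.2, 0.3, 0.5, 0.7, 0.8, 0.7, 0.5, 0.3])

# A difference kernel responds to green-up / senescence transitions,
# the kind of phenological feature that separates tree species.
edge = conv1d(ndvi, np.array([-1.0, 0.0, 1.0]))
print(edge)  # positive during green-up, negative during senescence
```

A trained network stacks many such filtered sequences, applies nonlinearities and pooling, and classifies the resulting temporal features.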
The accurate mapping of urban impervious surfaces from remote sensing images is crucial for understanding urban land-cover change and addressing environmental issues related to impervious-surface change. To date, most studies have built indices to map impervious surfaces based on shortwave infrared (SWIR) or thermal infrared (TIR) bands from middle- to low-spatial-resolution remote sensing images. However, this limits the use of high-spatial-resolution remote sensing data (e.g., GaoFen-2, QuickBird, and IKONOS), which lack these bands. In addition, the separation of bare soil and impervious surfaces has not been effectively solved. In this article, on the basis of a spectral analysis of impervious-surface and non-impervious-surface (vegetation, water, soil, and non-photosynthetic vegetation (NPV)) data acquired from widely recognized spectral libraries and Sentinel-2 MSI images across different regions and seasons, a novel spectral index named the Normalized Impervious Surface Index (NISI) was proposed for extracting impervious-area information using only the blue, green, red, and near-infrared (NIR) bands. We performed comprehensive assessments of the NISI, and the results demonstrated that it performed best among the studied methods in separating soil from impervious surfaces in Sentinel-2 MSI images. Furthermore, regarding impervious-surface mapping accuracy, the NISI achieved an overall accuracy (OA) of 89.28% (±0.258), a producer’s accuracy (PA) of 89.76% (±1.754), and a user’s accuracy (UA) of 90.68% (±1.309), which were higher than those of the machine learning algorithms, supporting the NISI as an effective measure for urban impervious-surface mapping and analysis. The results indicate that the NISI has high robustness and good applicability.
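The OA, PA, and UA figures reported above all derive from a confusion matrix between reference and mapped classes. The sketch below shows those standard definitions on an illustrative two-class (impervious vs. non-impervious) matrix with made-up counts; the paper's exact validation samples are not reproduced here.

```python
import numpy as np

def accuracy_metrics(cm, cls=0):
    """Overall, producer's, and user's accuracy from a confusion matrix
    cm, where cm[i, j] counts reference class i mapped to class j.
    PA and UA are reported for class index `cls`."""
    oa = np.trace(cm) / cm.sum()          # overall accuracy
    pa = cm[cls, cls] / cm[cls, :].sum()  # producer's accuracy (1 - omission error)
    ua = cm[cls, cls] / cm[:, cls].sum()  # user's accuracy (1 - commission error)
    return oa, pa, ua

# Illustrative counts: rows = reference, cols = mapped
# (class 0 = impervious, class 1 = non-impervious).
cm = np.array([[90, 10],
               [ 8, 92]])
oa, pa, ua = accuracy_metrics(cm, cls=0)
print(oa, pa, ua)  # 0.91, 0.90, ~0.918
```

PA measures how much of the reference impervious area was correctly mapped, while UA measures how much of the mapped impervious area is actually impervious; reporting both, as the abstract does, exposes omission and commission errors separately.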
Background: Traditional fracture reduction surgery cannot ensure the accuracy of the reduction and consumes the physical strength of the surgeon. Although monitoring the fracture reduction process through radiography can improve the accuracy of the reduction, it exposes both patients and surgeons to harmful radiation. Methods: We propose a novel fracture reduction solution in which a parallel robot performs the reduction surgery. A binocular camera indirectly obtains the position and posture of a fragment wrapped in tissue by measuring the posture of external markers. A reduction path is designed according to clinical experience of fracture reduction, and position-based visual servoing is then used to control the robot during the surgery. The study was approved by the ethics committee of the Rehabilitation Hospital, National Research Center for Rehabilitation Technical Aids, Beijing, China. Results: Ten virtual fracture cases were used for reduction experiments, with simulation and model-bone experiments designed respectively. In the model-bone experiments, the fragments were reduced without collision. The angulation error after reduction was 3.3° ± 1.8°, the axial rotation error was 0.8° ± 0.3°, and the transverse stagger and axial direction errors were 2 ± 0.5 mm and 2.5 ± 1 mm, respectively. After the reduction surgery, an external fixator was used to assist fixation, and the deformity could be completely corrected. Conclusions: The solution can perform fracture reduction surgery with acceptable accuracy, effectively reduces the number of radiographs required during surgery, and avoids collision between fragments.
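Residual rotational errors like the angulation and axial-rotation figures above can be quantified as the geodesic angle between the target and achieved fragment orientations. The sketch below is illustrative only: it assumes the marker tracking yields fragment poses as rotation matrices, and uses a hypothetical `rot_z` helper to fabricate a 3° misalignment; it is not the authors' evaluation code.

```python
import numpy as np

def rotation_error_deg(R_target, R_actual):
    """Geodesic angle (in degrees) between two rotation matrices --
    one way to quantify residual angular misalignment after reduction."""
    R_rel = R_target.T @ R_actual
    # trace(R) = 1 + 2*cos(theta) for a rotation by angle theta.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def rot_z(deg):
    """Rotation about the z-axis by `deg` degrees (illustrative pose)."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A simulated 3-degree residual misalignment about the bone's axis.
err = rotation_error_deg(np.eye(3), rot_z(3.0))
print(round(err, 3))  # 3.0
```

Translational residuals (the transverse stagger and axial errors) would be measured analogously as components of the difference between target and achieved fragment positions.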