Research Highlights: This paper proposes a new method for hemispherical forest canopy image segmentation. The method is based on deep learning and provides a robust, fully automatic technique for segmenting forest canopy hemispherical photography (CHP) and calculating the gap fraction (GF). Background and Objectives: CHP is widely used to estimate structural forest variables. The GF is the most important parameter for calculating the leaf area index (LAI), and its calculation requires the binary segmentation result of the CHP. Materials and Methods: Our method consists of three modules, namely, northing correction, valid region extraction, and hemispherical image segmentation. The core procedure among these steps is hemispherical canopy image segmentation based on the U-Net convolutional neural network. Our method is compared with traditional threshold methods (e.g., the Otsu and Ridler methods), a fuzzy clustering method (FCM), commercial professional software (WinSCANOPY), and the Habitat-Net network method. Results: The experimental results show that the method presented here achieves a Dice similarity coefficient (DSC) of 89.20% and an accuracy of 98.73%. Conclusions: The method presented here outperforms the Habitat-Net and WinSCANOPY methods, along with the FCM, and it is significantly better than the Otsu and Ridler threshold methods. The method takes the original canopy hemispherical image as input, automatically executes the three modules in sequence, and outputs the binary segmentation map; it is thus a pipelined, end-to-end method.
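The abstract's evaluation metrics (DSC, accuracy) and the gap fraction can all be computed directly from binary segmentation masks. The sketch below is illustrative only, assuming masks where 1 = canopy and 0 = sky; the function name and toy masks are not from the paper.

```python
import numpy as np

def dice_and_accuracy(pred, truth):
    """Dice similarity coefficient and pixel accuracy for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    accuracy = (pred == truth).mean()
    return dice, accuracy

# Toy 4x4 masks (hypothetical): 1 = canopy pixel, 0 = sky pixel
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])

dice, acc = dice_and_accuracy(pred, truth)

# Gap fraction: proportion of sky pixels in the segmented map
gap_fraction = 1.0 - pred.mean()
```

This is the standard definition of DSC (twice the overlap divided by the total mask sizes); the GF here is a simple sky-pixel proportion, while LAI estimation would additionally weight gaps by zenith angle.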
Hand bone age, as an indicator of biological age, can accurately reflect an individual's level of development and maturity. Bone age assessment results for adolescents can provide a theoretical basis for evaluating their growth and development and for predicting height. In this study, a deep convolutional neural network (CNN) model based on fine-grained image classification is proposed, using a hand bone image dataset provided by the Radiological Society of North America (RSNA) as the research object. This model can automatically locate informative regions and extract local features in the process of hand bone image recognition; the extracted local features are then combined with global features of the complete image for bone age classification. This method achieves end-to-end bone age assessment without any image annotation information (except bone age tags), improving the speed and accuracy of bone age assessment. Experimental results show that the proposed method achieves 66.38% and 68.63% recognition accuracy for males and females on the RSNA dataset, and the mean absolute errors are 3.71 ± 7.55 and 3.81 ± 7.74 months for males and females, respectively. The test time for each image is approximately 35 ms. This method achieves good performance and outperforms existing methods in bone age assessment based on weakly supervised fine-grained image classification.
INDEX TERMS: Bone age assessment, deep learning, convolutional neural network, fine-grained image classification.
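The fusion of local and global features described above can be sketched as simple late fusion by concatenation followed by a linear classifier. Everything below is a hypothetical illustration: the feature dimensions, the 228-class age range (months), and the random features stand in for real CNN outputs that the paper does not specify here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical feature vectors standing in for CNN outputs
global_feat = rng.normal(size=512)  # features from the complete hand image
local_feat = rng.normal(size=512)   # features from an automatically located region

# Late fusion: concatenate local and global features
fused = np.concatenate([global_feat, local_feat])

# Hypothetical classifier head: one class per month of bone age
num_classes = 228
W = rng.normal(size=(num_classes, fused.size)) * 0.01
probs = softmax(W @ fused)
predicted_age_months = int(np.argmax(probs))
```

In the actual model both feature extractors would be convolutional backbones trained jointly, but the fusion step itself reduces to this concatenate-then-classify pattern.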
Aim: This study aimed to automatically implement liver disease quantification (DQ) in lymphoma using CT images without lesion segmentation. Background: Computed tomography (CT) imaging manifestations of liver lymphoma include diffuse infiltration, blurred boundaries, vascular drift signs, and multiple lesions, making liver lymphoma segmentation extremely challenging. Methods: The method includes two steps: liver recognition and liver disease quantification. We use transfer learning to recognize the diseased livers automatically, and we delineate the livers manually using the CAVASS software. Once the liver is recognized, liver disease quantification is performed using the disease map model. We test our method on 10 patients with liver lymphoma. A random grouping cross-validation strategy is used to evaluate the quantification accuracy of the manual and automatic methods, with reference to the ground truth. Results: We split the 10 subjects into two groups based on lesion size. The average accuracy for total lesion burden (TLB) quantification is 91.76% ± 0.093 for the group with large lesions and 95.57% ± 0.032 for the group with small lesions using the manual organ (MO) method. An accuracy of 85.44% ± 0.146 for the group with large lesions and 81.94% ± 0.206 for the group with small lesions is obtained using the automatic organ (AO) method, with reference to the ground truth. Conclusion: Our DQ-MO and DQ-AO methods show good performance for varied lymphoma morphologies, from homogeneous to heterogeneous, and from single to multiple lesions in one subject. Our method can also be extended to CT images of other abdominal organs, such as the kidney, spleen, and gallbladder, for disease quantification.
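The abstract reports quantification accuracy relative to ground truth but does not state the exact formula. A common choice, shown here purely as an assumption, is one minus the relative error of the total lesion burden estimate; the function name and the example volumes are hypothetical.

```python
def tlb_accuracy(estimated_tlb, true_tlb):
    """Relative accuracy of a total lesion burden (TLB) estimate.

    Assumed form: 1 - |estimate - truth| / truth. This is one plausible
    definition, not necessarily the one used in the paper.
    """
    return 1.0 - abs(estimated_tlb - true_tlb) / true_tlb

# Hypothetical lesion burden volumes in mL
acc = tlb_accuracy(estimated_tlb=460.0, true_tlb=500.0)
```

Under this definition, an estimate of 460 mL against a 500 mL ground truth yields 92% accuracy, in the same range as the MO results quoted above.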
In agricultural production, weed removal is an important part of crop cultivation because weeds inevitably compete with crops for nutrients. Only by identifying and removing weeds can the quality of the harvest be guaranteed; the distinction between weeds and crops is therefore particularly important. Recently, deep learning technology has also been applied to botany and has achieved good results. Convolutional neural networks are widely used in deep learning because of their excellent classification performance. The purpose of this article is to find a new method of plant seedling classification. This method includes two stages: image segmentation and image classification. The first stage uses an improved U-Net to segment the dataset, and the second stage uses six classification networks to classify the seedlings in the segmented dataset. The dataset used for the experiment contained 12 different types of plants, namely, 3 crops and 9 weeds. The model was evaluated by multi-class statistical analysis of accuracy, recall, precision, and F1-score. The results show that the two-stage classification method combining the improved U-Net segmentation network and a classification network was more conducive to the classification of plant seedlings, with the classification accuracy reaching 97.7%.
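The multi-class metrics used for evaluation (accuracy, recall, precision, F1-score) can be derived from a confusion matrix. The sketch below shows one standard macro-averaged formulation; the function and the toy labels are illustrative, not the paper's code.

```python
import numpy as np

def multiclass_metrics(y_true, y_pred, num_classes):
    """Accuracy and macro-averaged precision, recall, F1 from label vectors."""
    # Confusion matrix: rows = true class, columns = predicted class
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # per true class
    f1 = np.where(precision + recall > 0,
                  2 * precision * recall / np.maximum(precision + recall, 1e-12),
                  0.0)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()

# Toy example with 3 classes (a 12-class seedling task works identically)
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
acc, prec, rec, f1 = multiclass_metrics(y_true, y_pred, num_classes=3)
```

Macro averaging weights every class equally, which matters for a dataset like this one where the 3 crop classes could otherwise be dominated by the 9 weed classes.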