Hand bone age, a measure of biological age in humans, accurately reflects an individual's level of development and maturity. Bone age assessment results for adolescents provide a theoretical basis for evaluating growth and development and for predicting height. In this study, a deep convolutional neural network (CNN) model based on fine-grained image classification is proposed, using a hand bone image dataset provided by the Radiological Society of North America (RSNA) as the research object. The model automatically locates informative regions and extracts local features during hand bone image recognition; the extracted local features are then combined with global features of the complete image for bone age classification. The method achieves end-to-end bone age assessment without any image annotation other than bone age labels, improving both the speed and accuracy of assessment. Experimental results show that the proposed method achieves recognition accuracies of 66.38% for males and 68.63% for females on the RSNA dataset, with mean absolute errors of 3.71 ± 7.55 and 3.81 ± 7.74 months, respectively. The test time for each image is approximately 35 ms. The method achieves good performance and outperforms existing approaches to bone age assessment based on weakly supervised fine-grained image classification.
INDEX TERMS: Bone age assessment, deep learning, convolutional neural network, fine-grained image.
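A minimal sketch of the idea described above, combining global features from the whole radiograph with local features from an automatically located informative region; this is not the authors' released code, and the ResNet-18 backbone, local-window size, and number of bone age classes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class GlobalLocalBoneAgeNet(nn.Module):
    def __init__(self, num_classes=240):  # assumed: one class per month of bone age
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # conv feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        # A coarse attention map used to locate the most informative region.
        self.attn = nn.Conv2d(512, 1, kernel_size=1)
        self.classifier = nn.Linear(512 * 2, num_classes)  # global + local features

    def forward(self, x):
        feat = self.backbone(x)                    # (B, 512, H', W')
        global_vec = self.pool(feat).flatten(1)    # global descriptor

        # Find the most responsive spatial cell and pool a small local window around it.
        attn = self.attn(feat)
        b, _, h, w = attn.shape
        idx = attn.view(b, -1).argmax(dim=1)
        ys, xs = (idx // w).tolist(), (idx % w).tolist()
        local_vecs = []
        for i in range(b):
            y0, y1 = max(ys[i] - 1, 0), min(ys[i] + 2, h)
            x0, x1 = max(xs[i] - 1, 0), min(xs[i] + 2, w)
            local_vecs.append(feat[i, :, y0:y1, x0:x1].mean(dim=(1, 2)))
        local_vec = torch.stack(local_vecs)

        # Classify from the concatenated global and local descriptors.
        return self.classifier(torch.cat([global_vec, local_vec], dim=1))


if __name__ == "__main__":
    logits = GlobalLocalBoneAgeNet()(torch.randn(2, 3, 512, 512))
    print(logits.shape)  # torch.Size([2, 240])
```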
Plant pests are the primary biological threat to agricultural and forestry production as well as to forest ecosystems. Monitoring forest-pest damage via satellite images is crucial for the development of prevention and control strategies. Previous studies that used deep learning to monitor pest damage in satellite imagery relied on RGB images and did not use multispectral imagery or vegetation indices. Multispectral images and vegetation indices contain a wealth of information on plant health, which can improve the precision of pest damage detection. The aim of this study is to further improve forest-pest infestation area segmentation by combining multispectral, vegetation index, and RGB information in a deep learning model. We also propose a new image segmentation method based on UNet++ with an attention mechanism for detecting forest damage caused by bark beetle and aspen leaf miner in Sentinel-2 images. ResNeSt101 is used as the feature extraction backbone, and the scSE attention module is introduced in the decoding phase to improve segmentation results. We used Sentinel-2 imagery to produce a dataset based on forest health damage data gathered by the Ministry of Forests, Lands, Natural Resource Operations and Rural Development (FLNRORD) in British Columbia (BC), Canada, during aerial overview surveys (AOS) in 2020. The dataset contains the 11 original Sentinel-2 bands and 13 vegetation indices. The experimental results confirm the significance of vegetation indices and multispectral data in enhancing segmentation. The results also demonstrate that the proposed method achieves better segmentation quality and more accurate quantitative indices, with an overall accuracy of 85.11%, than state-of-the-art pest area segmentation methods.
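A minimal sketch of such a model using the segmentation_models_pytorch package, which provides a UNet++ with a ResNeSt encoder and scSE decoder attention; the 24-channel input follows the dataset description above (11 Sentinel-2 bands + 13 vegetation indices), while the class count and other settings are illustrative assumptions rather than the authors' configuration.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="timm-resnest101e",   # ResNeSt101 feature extraction backbone
    encoder_weights="imagenet",
    in_channels=24,                    # 11 Sentinel-2 bands + 13 vegetation indices
    classes=2,                         # assumed: pest-damaged vs. healthy forest
    decoder_attention_type="scse",     # scSE attention in the decoding phase
)

x = torch.randn(1, 24, 256, 256)       # one multispectral + vegetation-index tile
with torch.no_grad():
    mask_logits = model(x)             # (1, 2, 256, 256) per-pixel class scores
print(mask_logits.shape)
```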
Aim: This study aimed to automatically implement liver disease quantification (DQ) in lymphoma using CT images without lesion segmentation. Background: Computed tomography (CT) imaging manifestations of liver lymphoma include diffuse infiltration, blurred boundaries, vascular drift signs, and multiple lesions, making liver lymphoma segmentation extremely challenging. Methods: The method includes two steps: liver recognition and liver disease quantification. We use transfer learning to recognize the diseased livers automatically, and we also delineate the livers manually using the CAVASS software. Once the liver is recognized, disease quantification is performed using the disease map model. We test our method on 10 patients with liver lymphoma. A random-grouping cross-validation strategy is used to evaluate the quantification accuracy of the manual and automatic methods with reference to the ground truth. Results: We split the 10 subjects into two groups based on lesion size. The average accuracy for total lesion burden (TLB) quantification is 91.76%±0.093 for the group with large lesions and 95.57%±0.032 for the group with small lesions using the manual organ (MO) method. Accuracies of 85.44%±0.146 for the group with large lesions and 81.94%±0.206 for the group with small lesions are obtained using the automatic organ (AO) method, with reference to the ground truth. Conclusion: Our DQ-MO and DQ-AO methods show good performance for varied lymphoma morphologies, from homogeneous to heterogeneous and from single to multiple lesions in one subject. Our method can also be extended to CT images of other abdominal organs, such as the kidney, spleen, and gallbladder, for disease quantification.
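A minimal sketch of the transfer-learning step only (recognizing which CT slices contain the diseased liver); the backbone, binary class head, and fine-tuning scheme are illustrative assumptions, and the disease-map quantification step is not shown.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Start from an ImageNet-pretrained backbone and replace the final layer
# with a binary "liver present / absent" head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze early layers and fine-tune only the last block and the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer4", "fc"))

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

def train_step(ct_slices, labels):
    """One fine-tuning step on a batch of CT slices (B, 3, H, W) with 0/1 labels."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(ct_slices), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```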
In agricultural production, weeds inevitably compete with crops for nutrients, so weed removal is an important part of crop cultivation. Only by identifying and removing weeds can harvest quality be guaranteed; distinguishing weeds from crops is therefore particularly important. Recently, deep learning has been applied in botany with good results, and convolutional neural networks are widely used because of their excellent classification performance. The purpose of this article is to propose a new method for plant seedling classification. The method comprises two stages: image segmentation and image classification. In the first stage, an improved U-Net segments the images; in the second stage, six classification networks classify the seedlings in the segmented images. The dataset used for the experiment contained 12 different types of plants: 3 crops and 9 weeds. The model was evaluated with multi-class statistics: accuracy, recall, precision, and F1-score. The results show that the two-stage method combining the improved U-Net segmentation network with a classification network is better suited to plant seedling classification, reaching a classification accuracy of 97.7%.
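A minimal sketch of the two-stage pipeline: a segmentation network masks out the background, then a classification network labels the masked seedling image. The "improved U-Net" modifications and the six classifiers compared in the article are not reproduced here; a plain smp.Unet and a ResNet-50 stand in as assumptions.

```python
import torch
import segmentation_models_pytorch as smp
import torchvision.models as models

segmenter = smp.Unet(encoder_name="resnet34", in_channels=3, classes=1)
classifier = models.resnet50(weights=None)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 12)  # 3 crops + 9 weeds

def classify_seedling(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) RGB batch -> (B, 12) species logits."""
    with torch.no_grad():
        mask = torch.sigmoid(segmenter(image)) > 0.5   # stage 1: plant vs. background
    masked = image * mask                              # zero out the background pixels
    return classifier(masked)                          # stage 2: classify the seedling

logits = classify_seedling(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 12])
```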
Stomata are the main medium of water exchange in plants; they regulate gas exchange and support the processes of photosynthesis and transpiration. Stomata are surrounded by guard cells, and the transpiration rate is controlled by their opening and closing. Stomatal state (open or closed) plays an important role in describing plant health. In addition, counting stomata allows scientists to study the numbers of open and closed stomata and to measure their density and distribution on the leaf surface through different sampling techniques. Although techniques for counting stomata have been proposed, these methods require preparing samples in isolation before identifying and classifying stomatal states in the sampled leaves. We improved YOLO-X and applied transfer learning to count stomata and identify their open or closed status on live black poplar leaves. The method achieved an average accuracy of 98.3% and a recall of 95.9%, helping researchers obtain accurate information on leaf stomatal opening and closing status in an efficient and simple way.
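A minimal post-processing sketch for counting open versus closed stomata from a detector's output; the detection tuple format and the 0/1 class indices are assumptions for illustration, not the YOLO-X API, and the detector training and transfer learning are not shown.

```python
from collections import Counter

CLASS_NAMES = {0: "open", 1: "closed"}   # assumed class index mapping

def count_stomata(detections, score_threshold=0.5):
    """detections: iterable of (x1, y1, x2, y2, score, class_id) for one leaf image.

    Returns the number of open and closed stomata above the score threshold.
    """
    counts = Counter(
        CLASS_NAMES[int(cls)]
        for *_box, score, cls in detections
        if score >= score_threshold and int(cls) in CLASS_NAMES
    )
    return {"open": counts["open"], "closed": counts["closed"],
            "total": counts["open"] + counts["closed"]}

# Example with hypothetical detections on one leaf image:
example = [(10, 12, 40, 38, 0.92, 0), (55, 60, 80, 84, 0.88, 1),
           (100, 20, 130, 44, 0.43, 0)]
print(count_stomata(example))  # {'open': 1, 'closed': 1, 'total': 2}
```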