Chinese Herbal Slices (CHS) are critical components of Traditional Chinese Medicine (TCM); accurate recognition of CHS is crucial for applications in medicine, production, and education. However, CHS recognition is currently performed mainly by experienced professionals, which cannot meet the vast demand of the CHS market because the process is time-consuming and the number of professionals is limited. Although some automated CHS recognition approaches have been proposed, their performance still needs improvement because they rely primarily on traditional machine learning with hand-crafted features, resulting in relatively low accuracy. Additionally, few CHS datasets are available for research aimed at practical application. To comprehensively address these problems, we propose a combined channel attention and spatial attention module network (CCSM-Net) for efficiently recognizing CHS from 2-D images. The CCSM-Net integrates channel attention (CA) and spatial attention (SA), focusing on both the most informative features of a CHS image and their spatial positions. In particular, pairs of max-pooling and average-pooling operations are used in the CA and SA modules to aggregate the channel information of the feature map. We then construct a dataset of 14,196 images covering 182 categories of commonly used CHS and evaluate our framework on it. Experimental results show that the proposed CCSM-Net achieves promising performance and outperforms other typical deep learning algorithms, reaching a recognition rate of 99.27%, a precision of 99.33%, a recall of 99.27%, and an F1-score of 99.26% across different numbers of CHS categories.
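The abstract does not include an implementation; the following is a minimal numpy sketch of the paired max-/average-pooling attention idea it describes, assuming a CBAM-style design (the MLP weights, the reduction ratio, and the averaging stand-in for the spatial 7×7 convolution are illustrative assumptions, not the authors' code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    # x: feature map of shape (C, H, W).
    # Aggregate spatial information with both average- and max-pooling,
    # pass each descriptor through a shared two-layer MLP, then sum.
    avg = x.mean(axis=(1, 2))                        # (C,)
    mx = x.max(axis=(1, 2))                          # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)     # shared MLP with ReLU
    scale = sigmoid(mlp(avg) + mlp(mx))              # channel weights in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x):
    # Aggregate channel information with average- and max-pooling to
    # form a single-channel spatial weighting map.
    avg = x.mean(axis=0)                             # (H, W)
    mx = x.max(axis=0)                               # (H, W)
    # A full implementation convolves [avg; mx] with a 7x7 kernel;
    # averaging the two maps is used here as a simple stand-in.
    scale = sigmoid((avg + mx) / 2.0)
    return x * scale[None, :, :]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))   # channel-reduction layer
w2 = rng.standard_normal((C, C // r))   # channel-expansion layer
y = spatial_attention(channel_attention(x, w1, w2))
print(y.shape)  # (8, 4, 4)
```

Applying the two modules sequentially preserves the feature map's shape while reweighting both channels and spatial positions.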
Extracting retinal vessels accurately is very important for diagnosing diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. Clinically, experienced ophthalmologists diagnose these diseases by segmenting retinal vessels manually and analysing their structural features, such as tortuosity and diameter. However, manual segmentation of retinal vessels is a time-consuming and laborious task with strong subjectivity. Automatic segmentation of retinal vessels can not only reduce the burden on ophthalmologists but also effectively address the shortage of experienced ophthalmologists in remote areas; it is therefore of great significance for clinical auxiliary diagnosis and treatment of ophthalmic diseases. In this paper, a method using SegNet is proposed to improve the accuracy of retinal vessel segmentation. The model is evaluated on three public datasets (DRIVE, STARE, and HRF) and achieves accuracy of 0.9518, 0.9683, and 0.9653; sensitivity of 0.7580, 0.7747, and 0.7070; specificity of 0.9804, 0.9910, and 0.9885; F1-score of 0.7992, 0.8369, and 0.7918; MCC of 0.7749, 0.8227, and 0.7643; and AUC of 0.9750, 0.9893, and 0.9740, respectively. The experimental results show that the proposed method outperforms many classical methods and holds promise for clinical application.
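The abstract names SegNet without detailing it; SegNet's distinguishing mechanism is that the decoder upsamples using the max-pooling indices stored by the encoder rather than learned upsampling. A minimal numpy sketch of that index-based unpooling (an illustration of the general mechanism, not the authors' code):

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    # x: (H, W) with H, W divisible by k. Returns the pooled map and the
    # within-block index of each maximum, as SegNet's encoder stores them.
    H, W = x.shape
    blocks = (x.reshape(H // k, k, W // k, k)
               .transpose(0, 2, 1, 3)
               .reshape(-1, k * k))
    idx = blocks.argmax(axis=1)
    pooled = blocks[np.arange(blocks.shape[0]), idx].reshape(H // k, W // k)
    return pooled, idx

def max_unpool(pooled, idx, k=2):
    # SegNet's decoder: place each pooled value back at its recorded
    # position and leave every other position zero.
    h, w = pooled.shape
    blocks = np.zeros((h * w, k * k))
    blocks[np.arange(h * w), idx] = pooled.ravel()
    return (blocks.reshape(h, w, k, k)
                  .transpose(0, 2, 1, 3)
                  .reshape(h * k, w * k))

x = np.array([[1., 3., 2., 0.],
              [4., 2., 1., 5.],
              [0., 1., 7., 2.],
              [2., 6., 3., 1.]])
pooled, idx = max_pool_with_indices(x)   # pooled == [[4, 5], [6, 7]]
restored = max_unpool(pooled, idx)       # maxima restored at original positions
```

Because boundary locations survive the encoder–decoder round trip exactly, this mechanism suits thin structures such as retinal vessels.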
Major Depressive Disorder (MDD) is the most prevalent psychiatric disorder, seriously affecting people’s quality of life. Manually identifying MDD from structural magnetic resonance imaging (sMRI) is laborious and time-consuming because clear physiological indicators are lacking. With the development of deep learning, many automated identification methods have been proposed, but most of them operate on 2D images, resulting in poor performance. In addition, the heterogeneity of MDD means that the changes reflected in patients’ brain imaging differ slightly from patient to patient, which constitutes a barrier to studying MDD identification from brain sMRI images. To comprehensively address these challenges, we propose an automated MDD identification framework for sMRI data (3D FRN-ResNet), which uses a 3D-ResNet to extract features and reconstructs them based on the feature maps. Notably, the 3D FRN-ResNet fully exploits the interlayer structural information in 3D sMRI data and preserves most of the spatial details as well as the location information when converting the extracted features into vectors. Furthermore, our model solves the feature map reconstruction problem in closed form, producing a straightforward and efficient classifier and dramatically improving performance. We evaluate our framework on a private brain sMRI dataset of MDD patients. Experimental results show that the proposed model exhibits promising performance and outperforms other typical methods, achieving accuracy, recall, precision, and F1-score values of 0.86776, 0.84237, 0.85333, and 0.84781, respectively.
Intracranial tumors, commonly known as brain tumors, can be life-threatening in severe cases. Magnetic resonance imaging (MRI) is widely used in diagnosing brain tumors because it is harmless to the human body and offers high image resolution. Owing to the high heterogeneity of brain tumors, their appearance in MRI is exceptionally irregular, and how to segment brain tumor MRI images accurately and quickly remains one of the hottest topics in the medical image analysis community. However, most existing brain tumor segmentation algorithms still operate on two-dimensional (2D) images and therefore cannot effectively capture the spatial dependence between features. In this study, we propose an automatic brain tumor segmentation method called scSE-NL V-Net. We use three-dimensional (3D) data as the model input and process it with 3D convolutions to capture dependencies across dimensions. Meanwhile, we adopt a non-local block as the self-attention block, which can reduce inherent image noise interference and compensate for the limited spatial dependence captured by convolution. To further improve the recognition accuracy of the convolutional neural network (CNN), we add the “Spatial and Channel Squeeze-and-Excitation” (scSE) block to V-Net. The dataset used in this paper is from the Brain Tumor Segmentation Challenge 2020 (BraTS2020) database. On the official BraTS2020 validation set, the Dice similarity coefficient is 0.65, 0.82, and 0.76 for the enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. Our model can thus serve as an auxiliary tool for the diagnosis of brain tumors.
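The scSE block the abstract adds to V-Net combines two recalibration paths: a channel squeeze-and-excitation (cSE) and a spatial squeeze-and-excitation (sSE). A minimal 2D numpy sketch of the standard scSE design follows (the weights and the element-wise-maximum combination are illustrative assumptions, not the authors' 3D implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scse(x, w1, w2, w_s):
    # x: feature map of shape (C, H, W).
    # cSE: squeeze space (global average pool), excite channels.
    z = x.mean(axis=(1, 2))                               # (C,)
    c_scale = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))       # (C,)
    x_cse = x * c_scale[:, None, None]
    # sSE: squeeze channels (a 1x1 convolution is a per-pixel dot
    # product over channels), excite spatial locations.
    s_scale = sigmoid(np.einsum('c,chw->hw', w_s, x))     # (H, W)
    x_sse = x * s_scale[None, :, :]
    # Combine the two recalibrated maps element-wise.
    return np.maximum(x_cse, x_sse)

rng = np.random.default_rng(1)
C, H, W, r = 8, 5, 5, 2
x = rng.standard_normal((C, H, W))
out = scse(x,
           rng.standard_normal((C // r, C)),   # reduction layer
           rng.standard_normal((C, C // r)),   # expansion layer
           rng.standard_normal(C))             # 1x1 "conv" weights
print(out.shape)  # (8, 5, 5)
```

The block is shape-preserving, so it can be dropped into any stage of an encoder–decoder network such as V-Net.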
To overcome the limitations of conventional breast screening based on digital mammography, a quasi-3D imaging technique, digital breast tomosynthesis (DBT), has been developed for breast cancer screening in recent years. In this work, a computer-aided architecture for mass region segmentation in DBT images is developed using a dilated deep convolutional neural network (DCNN). First, to enhance the low contrast of candidate breast tumour regions and suppress background tissue noise in the DBT image, a constraint matrix is established after top-hat transformation and multiplied with the DBT image. Second, input image patches are generated, and data augmentation is performed to create the training set for the dilated DCNN architecture. The mass regions in DBT images are then preliminarily segmented, with each pixel assigned one of two labels. Finally, the postprocessing procedure removes all false-positive regions with fewer than 50 voxels, and the final segmentation results are obtained by smoothing the boundaries of the mass regions with a median filter. The testing accuracy (ACC), sensitivity (SEN), and area under the receiver operating characteristic curve (AUC) are adopted as evaluation metrics; on the entire dataset, the ACC, SEN, and AUC for segmenting mass regions in DBT images are 86.3%, 85.6%, and 0.852, respectively. The experimental results indicate that our proposed approach achieves promising results compared with other classical CAD-based frameworks.
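The postprocessing step the abstract describes, removing connected components smaller than 50 voxels and median-filtering the remaining boundaries, can be sketched with scipy (shown in 2D on a toy mask; the filter size and connectivity are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def postprocess(mask, min_size=50):
    # Remove connected components smaller than min_size voxels, then
    # smooth the remaining region boundaries with a median filter.
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))  # size of each component
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size          # background (label 0) stays removed
    cleaned = keep[labeled]
    return ndimage.median_filter(cleaned.astype(np.uint8), size=3)

# Toy example: one 100-voxel blob (kept) and one 9-voxel speck (removed).
mask = np.zeros((40, 40), dtype=np.uint8)
mask[5:15, 5:15] = 1
mask[30:33, 30:33] = 1
out = postprocess(mask)
```

Thresholding component sizes before filtering ensures the median filter only smooths genuine mass boundaries rather than amplifying isolated false positives.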