Background: Contrast-enhanced spectral mammography (CESM) is an effective tool for diagnosing breast cancer, offering multiple types of images. However, few deep learning-based breast cancer classification methods exploit this property. To combine multiple CESM features and thereby aid physicians in making accurate diagnoses, we propose a hybrid approach that takes advantage of both fusion and classification models. Methods: We evaluated the proposed method on a CESM dataset of 760 images obtained from 95 patients aged 21 to 74 years. The framework consists of two main parts: a generative adversarial network (GAN)-based image fusion module and a Res2Net-based classification module. The fusion module generates a fused image that combines the characteristics of dual-energy subtracted (DES) and low-energy (LE) images, and the classification module classifies the fused image as benign or malignant. Results: The fused images contained complementary information from both image types (DES and LE), and the classification model achieved accurate results. In terms of quantitative indicators, the entropy of the fused images was 2.63, and on the test dataset the classification model achieved an accuracy of 94.784%, a precision of 95.016%, a recall of 95.912%, a specificity of 0.945, an F1-score of 0.955, and an area under the curve of 0.947. Conclusions: Extensive comparative experiments and analyses on our in-house dataset demonstrate that our method produces promising results in the fusion of CESM images and classifies fused CESM images more accurately than state-of-the-art methods.
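The abstract reports fusion quality via image entropy and classification quality via standard test-set metrics. As a rough, illustrative sketch (not the authors' code), the snippet below computes the Shannon entropy of a grayscale fused image with NumPy and the listed classification metrics with scikit-learn; the image array and the label/score vectors are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins), density=True)
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical fused image and test-set predictions (placeholders, not study data).
fused = np.random.randint(0, 256, size=(512, 512))            # stand-in for a fused CESM image
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])                   # 0 = benign, 1 = malignant
y_prob = np.array([0.1, 0.9, 0.7, 0.3, 0.8, 0.2, 0.6, 0.95])  # model scores
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("entropy     :", image_entropy(fused))
print("accuracy    :", accuracy_score(y_true, y_pred))
print("precision   :", precision_score(y_true, y_pred))
print("recall      :", recall_score(y_true, y_pred))
print("specificity :", tn / (tn + fp))
print("F1-score    :", f1_score(y_true, y_pred))
print("AUC         :", roc_auc_score(y_true, y_prob))
```

Specificity is derived from the confusion matrix because scikit-learn does not expose it as a dedicated scorer.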
Background. To explore the clinical value of enhanced computed tomography (enhanced CT), magnetic resonance imaging (MRI), carcinoembryonic antigen (CEA), and cancer antigen 199 (CA199) in the diagnosis of rectal cancer (RC). Methods. A total of 156 patients with RC confirmed by postoperative pathology, admitted to the Affiliated Yantai Yuhuangding Hospital of Qingdao University from March 2018 to November 2020, were included in the malignant group, and 52 patients with chronic proctitis were included in the benign control group. All patients underwent preoperative enhanced CT and MRI scans as well as serum CEA and CA199 tests. The accuracy, sensitivity, and specificity of enhanced CT, MRI, CEA, and CA199, alone and in combination, for the clinical staging of RC were calculated. Results. Postoperative pathological diagnosis showed that, of the 156 RC patients, 35 were at stage T1, 29 at T2, 24 at T3, 11 at T4, 23 at N0, 21 at N1, 8 at N2, 3 at M0, and 2 at M1. The positive rate of MRI in the diagnosis of RC was higher than that of enhanced CT. Serum CEA and CA199 levels in the malignant group were significantly higher than those in the benign group. The sensitivity, specificity, and accuracy of combined detection were significantly higher than those of any single test. Conclusion. Compared with enhanced CT, MRI has a higher detection rate for T and N staging in patients with RC. The combination of enhanced CT, MRI, CEA, and CA199 provides more accurate diagnosis and preoperative staging for RC patients.
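For reference, the sensitivity, specificity, and accuracy reported above follow directly from a 2x2 table of test result against pathological diagnosis, and one common definition of "combined detection" is parallel testing, where a case counts as positive if any one of the tests is positive. The sketch below uses invented counts rather than study data to illustrate both calculations.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a single test (e.g., enhanced CT alone) -- not study data.
print(diagnostic_metrics(tp=120, fp=10, fn=36, tn=42))

# Parallel "combined detection": positive if any of CT, MRI, CEA, or CA199 is positive.
cases = [
    # (truth, ct, mri, cea, ca199) -- 1 = positive; invented example values
    (1, 1, 0, 0, 0),
    (1, 0, 1, 1, 0),
    (0, 0, 0, 0, 0),
    (0, 1, 0, 0, 0),
]
tp = fp = fn = tn = 0
for truth, *tests in cases:
    combined = int(any(tests))
    if truth and combined:
        tp += 1
    elif truth and not combined:
        fn += 1
    elif not truth and combined:
        fp += 1
    else:
        tn += 1
print(diagnostic_metrics(tp, fp, fn, tn))
```

Parallel combination tends to raise sensitivity at some cost in specificity, which is consistent with the rationale for multi-modality workups.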
In computer-aided diagnosis of breast cancer, deep learning has proven effective at determining whether lesions are present in tissue. However, traditional methods classify masses as benign or malignant in isolation, without considering the contextual features between the mass and its adjacent tissues. Furthermore, for contrast-enhanced spectral mammography (CESM), existing studies have performed feature extraction on only a single image per breast. In this paper, we propose a multi-input deep learning network for automatic breast cancer classification. Specifically, we simultaneously feed four images of each breast, carrying different feature information, into the network. We then process the feature maps in both horizontal and vertical directions, preserving the pixel-level contextual information within the neighborhood of the tumor during the pooling operation. Furthermore, we design a novel loss function based on the information bottleneck theory to optimize our multi-input network and ensure that the information shared across the multiple input images is fully utilized. Experiments on 488 images (256 benign and 232 malignant) from 122 patients show that the method achieves accuracy, precision, sensitivity, specificity, and F1-score values of 0.8806, 0.8803, 0.8810, 0.8801, and 0.8806, respectively. The qualitative, quantitative, and ablation results show that our method significantly improves the accuracy of breast cancer classification and reduces the false positive rate. It can reduce misdiagnosis rates and unnecessary biopsies, helping doctors reach accurate clinical diagnoses of breast cancer from multiple CESM images.
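As a minimal sketch of the ideas described above (four inputs per breast, directional pooling of the feature maps), the PyTorch module below applies a small shared encoder to each view and pools each feature map into horizontal and vertical profiles before a joint classifier. The encoder, profile length, and fusion by concatenation are assumptions for illustration only; this does not reproduce the authors' architecture or their information-bottleneck loss.

```python
import torch
import torch.nn as nn

class MultiViewCESMNet(nn.Module):
    """Toy four-input classifier with directional pooling (illustrative sketch only)."""
    def __init__(self, num_classes=2, profile_len=16):
        super().__init__()
        # Small shared encoder applied to each of the four input images.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Pool each feature map into horizontal and vertical profiles so that
        # positional context along each axis is kept rather than averaged away.
        self.pool_h = nn.AdaptiveAvgPool2d((1, profile_len))   # row-collapsed profile
        self.pool_v = nn.AdaptiveAvgPool2d((profile_len, 1))   # column-collapsed profile
        self.classifier = nn.Linear(4 * 2 * 64 * profile_len, num_classes)

    def forward(self, views):
        # views: list of four tensors, each of shape (B, 1, H, W)
        feats = []
        for x in views:
            f = self.encoder(x)                    # (B, 64, h, w)
            h = torch.flatten(self.pool_h(f), 1)   # (B, 64 * profile_len)
            v = torch.flatten(self.pool_v(f), 1)   # (B, 64 * profile_len)
            feats.append(torch.cat([h, v], dim=1))
        return self.classifier(torch.cat(feats, dim=1))

model = MultiViewCESMNet()
views = [torch.randn(2, 1, 128, 128) for _ in range(4)]   # random placeholders for four CESM views
logits = model(views)                                     # shape (2, 2): benign vs malignant scores
```

Concatenating the per-view features is the simplest fusion choice; the paper's loss-based treatment of the information shared across views would replace or regularize this step.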