Background: Automated measurement and classification models with objectivity and reproducibility are required for accurate evaluation of the breast cancer risk associated with fibroglandular tissue (FGT) and background parenchymal enhancement (BPE).
Purpose: To develop and evaluate a machine-learning algorithm for breast FGT segmentation and BPE classification.
Study Type: Retrospective.
Population: A total of 794 patients with breast cancer; 594 patients were assigned to the development set and 200 to the test set.
Field Strength/Sequence: 3 T and 1.5 T; T2-weighted and fat-saturated T1-weighted (T1W) imaging with dynamic contrast enhancement (DCE).
Assessment: Manual segmentation was performed for the whole breast and FGT regions in the contralateral breast. The BPE region was determined by thresholding the subtraction of the pre- and postcontrast T1W images within the segmented FGT mask. Two radiologists independently assessed the categories of FGT and BPE. A deep-learning-based algorithm was designed to segment and measure the volumes of the whole breast and FGT and to classify the grade of BPE.
Statistical Tests: Dice similarity coefficients (DSC) and Spearman correlation analysis were used to compare the volumes from the manual and deep-learning-based segmentations. Kappa statistics were used for agreement analysis. Areas under the receiver operating characteristic (ROC) curve (AUC) and F1 scores were calculated to evaluate the performance of BPE classification.
Results: The mean (±SD) DSC between manual and deep-learning segmentations was 0.85 ± 0.11. The correlation coefficient for FGT volume between manual and deep-learning-based segmentations was 0.93. The overall accuracy of manual and deep-learning segmentation in the BPE classification task was 66% and 67%, respectively. For binary categorization of BPE grade (minimal/mild vs. moderate/marked), overall accuracy increased to 91.5% with manual segmentation and 90.5% with deep-learning segmentation; the AUC was 0.93 for both methods.
Data Conclusion: This deep-learning-based algorithm can provide reliable segmentation and classification results for BPE.
Level of Evidence: 3. Technical Efficacy: Stage 2.
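The assessment section describes two reproducible steps: extracting the BPE region by thresholding the pre/postcontrast subtraction within the FGT mask, and scoring segmentation overlap with the DSC. Below is a minimal sketch of both, assuming NumPy arrays for the images and masks; the relative-enhancement threshold of 0.2 is a hypothetical placeholder, not the paper's actual criterion.

```python
import numpy as np

def bpe_mask(pre_t1w, post_t1w, fgt_mask, rel_threshold=0.2):
    """Label voxels inside the FGT mask whose relative enhancement on the
    pre/postcontrast subtraction exceeds the threshold as BPE."""
    enhancement = (post_t1w - pre_t1w) / np.maximum(pre_t1w, 1e-6)
    return (enhancement > rel_threshold) & fgt_mask

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)
```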
Objective: To assess focal mineral deposition in the globus pallidus (GP) on CT and quantitative susceptibility mapping (QSM) of MRI scans and to evaluate its clinical significance, particularly with respect to cerebrovascular degeneration.
Materials and Methods: This study included 105 patients (mean age, 66.1 ± 13.7 years; 40 male and 65 female) who underwent both CT and MRI with available QSM data between January 2017 and December 2019. The presence of focal mineral deposition in the GP on QSM (GP-QSM) and on CT (GP-CT) was assessed visually using a three-point scale. Cerebrovascular risk factors and small vessel disease (SVD) imaging markers were also assessed. Clinical and radiological findings were compared across the grades of GP-QSM and GP-CT. The relationships between GP grades and cerebrovascular risk factors and SVD imaging markers were assessed using univariable and multivariable linear regression analyses.
Results: GP-CT and GP-QSM were significantly associated (p < 0.001) but were not identical. Higher GP-CT and GP-QSM grades were associated with smaller gray matter (GM) volumes (p = 0.030 and p = 0.025, respectively) and white matter volumes (p = 0.013 and p = 0.019, respectively), as well as larger GP volumes (p < 0.001 for both). Among SVD markers, white matter hyperintensity was significantly associated with GP-CT (p = 0.006) and brain atrophy with GP-QSM (p = 0.032) in univariable analysis. In multivariable analysis, the normalized volume of the GP was independently positively associated with GP-CT (p < 0.001) and GP-QSM (p = 0.002), while the normalized GM volume was independently negatively associated with GP-CT (p = 0.040) and GP-QSM (p = 0.035).
Conclusion: Focal mineral deposition in the GP on CT and QSM might be a potential imaging marker of cerebrovascular degeneration. Both were associated with increased GP volume.
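The univariable and multivariable linear regressions described above can be sketched as follows, assuming a per-patient table; the file name and column names (gp_qsm_grade, wmh_score, gp_volume_norm, gm_volume_norm) are hypothetical stand-ins, not the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gp_cohort.csv")  # hypothetical per-patient table

# Univariable: one SVD marker or risk factor at a time against GP grade.
uni = smf.ols("gp_qsm_grade ~ wmh_score", data=df).fit()

# Multivariable: normalized GP and GM volumes entered together.
multi = smf.ols("gp_qsm_grade ~ gp_volume_norm + gm_volume_norm", data=df).fit()
print(uni.pvalues, multi.summary(), sep="\n")
```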
MRI is an imaging technology that non-invasively obtains high-quality medical images for diagnosis. However, MRI has the major disadvantage of long scan times, which cause patient discomfort and image artifacts. Parallel MRI, which reconstructs a high-fidelity MR image from under-sampled multi-coil k-space data, is widely used to reduce scan time. In this study, we propose a deep-learning method to reconstruct a high-fidelity MR image from under-sampled multi-coil k-space data. The proposed multi-domain Neumann network with sensitivity maps (MDNNSM) is based on the Neumann network and uses a forward model that includes coil sensitivity maps for parallel MRI reconstruction. The MDNNSM consists of three main structures: a CNN-based sensitivity reconstruction block that estimates coil sensitivity maps from multi-coil under-sampled k-space data, a recursive MR image reconstruction block that reconstructs the MR image, and a skip connection that accumulates each block's output to produce the final result. Experiments using the fastMRI T1-weighted brain image dataset were conducted at acceleration factors of 2, 4, and 8. Qualitative and quantitative results show that the proposed MDNNSM reconstructs MR images more accurately than other methods, including generalized autocalibrating partially parallel acquisitions (GRAPPA) and the original Neumann network.
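For intuition, here is a minimal sketch of a Neumann-network-style iteration for parallel MRI, assuming a SENSE-type forward model (coil sensitivities, FFT, undersampling mask) of the kind named in the abstract. The regularizer below is a hypothetical placeholder for the trained CNN, and the step size and block count are illustrative, not the paper's settings.

```python
import numpy as np

def forward(x, sens, mask):
    """A: image -> under-sampled multi-coil k-space, mask * F(S_i * x)."""
    return mask * np.fft.fft2(sens * x[None, ...])

def adjoint(y, sens, mask):
    """A^H: under-sampled multi-coil k-space -> coil-combined image."""
    return np.sum(np.conj(sens) * np.fft.ifft2(mask * y), axis=0)

def regularizer(x):
    # Hypothetical stand-in for the trained CNN regularizer.
    return 0.1 * x

def neumann_recon(y, sens, mask, eta=0.5, n_blocks=6):
    """Truncated Neumann series: each block applies
    (I - eta * A^H A - eta * R); a skip connection sums all block outputs."""
    b = eta * adjoint(y, sens, mask)  # initial term: eta * A^H y
    out = b.copy()
    for _ in range(n_blocks):
        b = b - eta * adjoint(forward(b, sens, mask), sens, mask) - eta * regularizer(b)
        out = out + b  # accumulate each block's output
    return out
```

The skip connection mirrors the abstract's description: rather than returning only the last block's output, the final image is the running sum of every block's contribution, which is exactly the truncated Neumann series for the regularized least-squares solution.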