Abstract. Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features [area under the ROC curve (AUC) = 0.81]. Further, the performance of ensemble classifiers based on both types was significantly better than that of either classifier type alone (AUC = 0.86 versus 0.81, p = 0.022). We conclude that transfer learning can improve current CADx methods while also providing standalone classifiers without large datasets, facilitating machine-learning methods in radiomics and precision medicine.
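The sketch below illustrates the general transfer-learning pipeline described in this abstract: an ImageNet-pretrained CNN is used as a fixed feature extractor and an SVM is scored with cross-validated AUC. The backbone (ResNet-18), feature layer, and preprocessing are assumptions for illustration only and are not the study's exact configuration.

```python
# Minimal sketch of CNN transfer-learning feature extraction followed by an SVM.
# Assumptions: ResNet-18 backbone, penultimate-layer features, ImageNet preprocessing.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # expose the 512-d penultimate feature vector
backbone.eval()

preprocess = T.Compose([
    T.Grayscale(num_output_channels=3),   # replicate single-channel ROIs to 3 channels
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_rois):
    """Map a list of lesion ROI images (PIL) to fixed-length CNN feature vectors."""
    batch = torch.stack([preprocess(roi) for roi in pil_rois])
    return backbone(batch).numpy()

# Hypothetical usage (lesion_rois and labels are placeholders):
# X = extract_features(lesion_rois)
# y = labels                              # 0 = benign, 1 = malignant
# aucs = cross_val_score(SVC(kernel="linear", probability=True),
#                        X, y, cv=5, scoring="roc_auc")
# print("mean AUC:", aucs.mean())         # note: folds here are not grouped by lesion
```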
Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often limited by small datasets, long computation times, and the need for extensive image preprocessing. We present a methodology that extracts and pools low- to mid-level features using a pre-trained convolutional neural network and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our fusion-based method demonstrates significant improvements over previous breast cancer CADx methods across three clinical imaging modalities (dynamic contrast-enhanced MRI, full-field digital mammography, and ultrasound) in terms of predictive performance in the task of estimating lesion malignancy. Further, our proposed methodology is computationally efficient and circumvents the need for image preprocessing.
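As a rough illustration of pooling low- to mid-level CNN features and fusing them with handcrafted radiomic features, the sketch below taps feature maps at several depths of a pretrained network, average-pools them, and then averages the malignancy scores of two SVMs. The backbone (VGG-16), tapped layers, pooling, and score-averaging fusion rule are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of pooled low/mid-level CNN features plus late fusion with radiomic features.
# Assumptions: VGG-16 conv stack, taps at example pooling stages, equal-weight score averaging.
import torch
import torchvision.models as models
from sklearn.svm import SVC

conv_stack = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
tap_layers = {4, 9, 16}   # example stages spanning low- to mid-level features

@torch.no_grad()
def pooled_cnn_features(x):
    """Average-pool feature maps at several depths and concatenate the results.

    x: preprocessed image batch tensor of shape (N, 3, H, W).
    """
    feats = []
    for i, layer in enumerate(conv_stack):
        x = layer(x)
        if i in tap_layers:
            feats.append(torch.mean(x, dim=(2, 3)))   # global average pool per channel
    return torch.cat(feats, dim=1).numpy()

def fused_malignancy_scores(cnn_tr, rad_tr, y_tr, cnn_te, rad_te):
    """Late fusion: average the posteriors of a CNN-feature SVM and a radiomic-feature SVM."""
    svm_cnn = SVC(kernel="linear", probability=True).fit(cnn_tr, y_tr)
    svm_rad = SVC(kernel="linear", probability=True).fit(rad_tr, y_tr)
    return 0.5 * (svm_cnn.predict_proba(cnn_te)[:, 1] + svm_rad.predict_proba(rad_te)[:, 1])
```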
Routine asymptomatic testing strategies for COVID-19 have been proposed to prevent outbreaks in high-risk healthcare environments. We used simulation modeling to evaluate the optimal frequency of viral testing. We found that routine testing substantially reduces risk of outbreaks, but may need to be as frequent as twice weekly.
Purpose: To assess the performance of using transferred features from pre‐trained deep convolutional neural networks (CNNs) in the task of classifying cancer in breast ultrasound images, and to compare this method of transfer learning with previous methods involving human‐designed features. Methods: A breast ultrasound dataset consisting of 1125 cases and 2393 regions of interest (ROIs) was used. Each ROI was labeled as cystic, benign, or malignant. Features were extracted from each ROI using pre‐trained CNNs and used to train support vector machine (SVM) classifiers in the tasks of distinguishing non‐malignant (benign+cystic) vs malignant lesions and benign vs malignant lesions. For a baseline comparison, classifiers were also trained on prior analytically‐extracted tumor features. Five‐fold cross‐validation (by case) was conducted with the area under the receiver operating characteristic curve (AUC) as the performance metric. Results: Classifiers trained on CNN‐extracted features were comparable to classifiers trained on human‐designed features. In the non‐malignant vs malignant task, both the SVM trained on CNN‐extracted features and the SVM trained on human‐designed features obtained an AUC of 0.90. In the task of determining benign vs malignant, the SVM trained on CNN‐extracted features obtained an AUC of 0.88, compared to the AUC of 0.85 obtained by the SVM trained on human‐designed features. Conclusion: We obtained strong results using transfer learning to characterize ultrasound breast cancer images. This method allows us to directly classify a small dataset of lesions in a computationally inexpensive fashion without any manual input. Modern deep learning methods in computer vision are contingent on large datasets and vast computational resources, which are often inaccessible for clinical applications. Consequently, we believe transfer learning methods will be important for computer‐aided diagnosis schemes in order to utilize advancements in deep learning and computer vision without the associated costs. This work was partially funded by NIH grant U01 CA195564 and the University of Chicago Metcalf program. M.L.G. is a stockholder in R2/Hologic, co‐founder and equity holder in Quantitative Insights, and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba. K.D. received royalties from Hologic.
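Because this study has multiple ROIs per case, the cross-validation folds must be split by case so that ROIs from the same patient never appear in both training and test sets. The sketch below shows one way to do this with grouped folds; the feature matrices and helper names are placeholders, and the study's exact splitting scheme may differ.

```python
# Minimal sketch of case-grouped five-fold cross-validation with AUC as the metric.
# Assumptions: features X, labels y, and per-ROI case identifiers are available as arrays.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def grouped_cv_auc(X, y, case_ids, n_splits=5):
    """Return per-fold AUCs for an SVM, with folds split by case identifier."""
    aucs = []
    for train_idx, test_idx in GroupKFold(n_splits=n_splits).split(X, y, groups=case_ids):
        clf = SVC(kernel="linear", probability=True).fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    return np.array(aucs)

# Hypothetical usage for the two tasks described above:
# auc_nonmal_vs_mal = grouped_cv_auc(X_cnn, y_nonmalignant_vs_malignant, case_ids)
# auc_benign_vs_mal = grouped_cv_auc(X_cnn[bm_mask], y_benign_vs_malignant, case_ids[bm_mask])
```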
To evaluate deep learning in the assessment of breast cancer risk, in which convolutional neural networks (CNNs) with transfer learning are used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA), 456 clinical FFDM cases were included: a "high-risk" dataset of BRCA1/2 gene-mutation carriers (53 cases), a "high-risk" dataset of unilateral cancer patients (75 cases), and a "low-risk" dataset (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both, in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using CNN (area under the curve [Formula: see text]; standard error [Formula: see text]) and RTA ([Formula: see text]; [Formula: see text]) in distinguishing BRCA1/2 carriers and low-risk women. However, in distinguishing unilateral cancer patients and low-risk women, performance was significantly greater with CNN ([Formula: see text]; [Formula: see text]) compared to RTA ([Formula: see text]; [Formula: see text]). Fusion classifiers performed significantly better than the RTA-alone classifiers, with AUC values of 0.86 and 0.84 in differentiating BRCA1/2 carriers from low-risk women and unilateral cancer patients from low-risk women, respectively. In conclusion, parenchymal characteristics extracted from FFDMs via deep learning performed as well as, or better than, conventional texture analysis in the task of distinguishing between cancer risk populations.
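For illustration, the sketch below builds a fusion classifier by concatenating CNN-derived parenchymal features with RTA texture features before training a single SVM, then reports a cross-validated AUC. Concatenation is an assumed fusion rule chosen only to show the general idea; the study's own fusion strategy may differ.

```python
# Sketch of feature-level fusion of CNN and RTA features with a single SVM.
# Assumptions: X_cnn and X_rta are per-case feature matrices with matching row order.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fusion_auc(X_cnn, X_rta, y, cv=5):
    """Cross-validated AUC for an SVM trained on concatenated CNN + RTA features."""
    X_fused = np.hstack([X_cnn, X_rta])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    return cross_val_score(clf, X_fused, y, cv=cv, scoring="roc_auc").mean()

# Hypothetical comparison against single-source classifiers:
# auc_cnn   = cross_val_score(make_pipeline(StandardScaler(), SVC(probability=True)),
#                             X_cnn, y, cv=5, scoring="roc_auc").mean()
# auc_fused = fusion_auc(X_cnn, X_rta, y)
```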