Background
Radiomics has shown promising results in the diagnosis, treatment-response, and prognostic assessment of multiple myeloma (MM). However, little evidence exists on the utility of radiomics in predicting a high‐risk cytogenetic (HRC) status in MM.
Purpose
To develop and test a magnetic resonance imaging (MRI)‐based radiomics model for predicting an HRC status in MM patients.
Study Type
Retrospective.
Population
Eighty‐nine MM patients (HRC, n = 37; non‐HRC, n = 52).
Field Strength/Sequence
3.0 T; fast spin‐echo (FSE) T1‐weighted imaging (T1WI) and fat‐suppressed T2WI (FS‐T2WI).
Assessment
Overall, 1409 radiomics features were extracted from each volume of interest drawn by radiologists. Three sequential feature‐selection steps (variance threshold, SelectKBest, and least absolute shrinkage and selection operator [LASSO]) were repeated 10 times with 5‐fold cross‐validation. Radiomics models were constructed from the three most frequently selected features for T1WI, FS‐T2WI, and two‐sequence MRI (T1WI plus FS‐T2WI). Radiomics features alone, clinical data alone (age and visually assessed MRI pattern), and radiomics combined with clinical data were used with six classifiers (support vector machine, random forest, logistic regression [LR], decision tree, k‐nearest neighbor, and XGBoost) to distinguish HRC from non‐HRC status. Model performance was evaluated with area under the curve (AUC) values.
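The three-step feature-selection scheme above can be sketched with scikit-learn. This is a minimal sketch under stated assumptions: the feature matrix and labels are simulated stand-ins, and the SelectKBest `k` and LASSO settings are illustrative choices not specified in the abstract.

```python
# Sketch of the described pipeline: variance threshold -> SelectKBest -> LASSO,
# repeated over cross-validation folds, tallying how often each feature survives.
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif
from sklearn.linear_model import LassoCV
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(89, 1409))   # 89 patients x 1409 radiomics features (simulated)
y = rng.integers(0, 2, size=89)   # 1 = HRC, 0 = non-HRC (simulated)

counts = np.zeros(X.shape[1])     # selection frequency per feature
for repeat in range(10):          # the abstract repeats selection 10 times
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=repeat)
    for train_idx, _ in cv.split(X, y):
        Xt, yt = X[train_idx], y[train_idx]
        # Step 1: drop near-constant features
        vt = VarianceThreshold(threshold=0.0).fit(Xt)
        # Step 2: univariate screening (k = 50 is an assumption)
        kb = SelectKBest(f_classif, k=50).fit(vt.transform(Xt), yt)
        # Step 3: LASSO keeps features with nonzero coefficients
        lasso = LassoCV(cv=3, random_state=0).fit(
            kb.transform(vt.transform(Xt)), yt
        )
        kept = np.flatnonzero(lasso.coef_)
        # Map back to original feature indices and tally
        idx = np.flatnonzero(vt.get_support())[kb.get_support(indices=True)][kept]
        counts[idx] += 1

top3 = np.argsort(counts)[-3:]    # three most frequently selected features
```

The tally over repeated folds is what makes the final three-feature signature stable rather than fold-dependent.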
Statistical Tests
Mann–Whitney U‐test, Chi‐squared test, Z test, and DeLong method.
Results
The LR classifier outperformed the other classifiers across the different data types (AUC: 0.65–0.82; P < 0.05), and the two‐sequence MRI models outperformed the other data models across classifiers (AUC: 0.68–0.82; P < 0.05). Thus, the two‐sequence LR model yielded the best performance (AUC: 0.82 ± 0.02; sensitivity: 84.1%; specificity: 68.1%; accuracy: 74.7%; P < 0.05).
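The best-performing setup, a logistic-regression classifier scored by cross-validated AUC, can be sketched as follows; the data here are simulated stand-ins for the three selected two-sequence radiomics features, and the class shift is an arbitrary illustrative choice.

```python
# Minimal sketch: LR classifier evaluated by 5-fold cross-validated AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
y = np.r_[np.ones(37), np.zeros(52)]             # 37 HRC vs. 52 non-HRC labels
X = rng.normal(size=(89, 3)) + 0.8 * y[:, None]  # 3 features, shifted by class

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(LogisticRegression(), X, y, cv=cv, scoring="roc_auc")
mean_auc = aucs.mean()                           # summary performance estimate
```

Stratified folds preserve the HRC/non-HRC ratio in each split, which matters with only 89 patients.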
Conclusion
The LR‐based machine learning method appears superior to the other classifier methods for assessing HRC status in MM, and radiomics features based on two‐sequence MRI showed good performance in differentiating HRC from non‐HRC status.
Evidence Level
3
Technical Efficacy
Stage 2
Background
The diagnosis of labral injury on MRI is time-consuming and prone to incorrect diagnoses.
Purpose
To explore the feasibility of applying deep learning to diagnose and classify labral injuries on MRI.
Study Type
Retrospective.
Population
A total of 1016 patients, divided into normal-labrum (n = 168, class 0) and abnormal-labrum (n = 848) groups; the abnormal group comprised class 1 (degeneration, n = 111), class 2 (partial or complete tear, n = 437), and unclassified injury (n = 300). Patients were randomly divided into training, validation, and test cohorts at a 55%:15%:30% ratio.
Field Strength/Sequence
Fat-saturation proton density-weighted fast spin-echo sequence at 3.0 T.
Assessment
Convolutional neural network-6 (CNN-6) was used to extract, discriminate, and detect oblique coronal (OCOR) and oblique sagittal (OSAG) images. Mask R-CNN was used for segmentation, and LeNet-5 was used to diagnose and classify labral injuries. A weighting method combined the OCOR and OSAG models, and an output-input connection linked the whole diagnosis/classification system. Four radiologists performed subjective diagnoses for comparison.
Statistical Tests
CNN-6 and LeNet-5 were evaluated by the area under the receiver operating characteristic (ROC) curve and related parameters; mean average precision (MAP) evaluated the Mask R-CNN. McNemar's test was used to compare the radiologists and the models. A P value < 0.05 was considered statistically significant.
Results
The area under the curve (AUC) of CNN-6 was 0.99 for extraction, discrimination, and detection. MAP values of Mask R-CNN for OCOR and OSAG image segmentation were 0.96 and 0.99, respectively. The accuracies of LeNet-5 in diagnosis and classification were 0.94/0.94 (OCOR) and 0.92/0.91 (OSAG), respectively, and those of the weighted models were 0.94 and 0.97, respectively. The accuracies of the radiologists in diagnosis and classification ranged from 0.85 to 0.92 and from 0.78 to 0.94, respectively.
Data Conclusion
Deep learning can assist radiologists in diagnosing and classifying labral injuries.
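The weighting step that combines the OCOR and OSAG model outputs can be sketched as a weighted average of per-class probabilities. This is a sketch under assumptions: the abstract does not specify the combination rule, so the equal weights and toy probabilities below are illustrative only.

```python
# Hypothetical weighted combination of the two imaging-plane models' outputs.
import numpy as np

def combine(prob_ocor, prob_osag, w_ocor=0.5):
    """Weighted average of per-class probabilities from the two planes."""
    p = w_ocor * prob_ocor + (1 - w_ocor) * prob_osag
    return p / p.sum(axis=-1, keepdims=True)   # renormalize per sample

prob_ocor = np.array([[0.7, 0.2, 0.1]])  # toy probabilities: normal/degeneration/tear
prob_osag = np.array([[0.5, 0.4, 0.1]])
combined = combine(prob_ocor, prob_osag)
pred = combined.argmax(axis=-1)          # predicted class per sample
```

Combining the two planes lets a confident prediction in one view compensate for an ambiguous one in the other, consistent with the weighted models' higher classification accuracy reported above.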