Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, and lesions presenting as GGO are more likely to be malignant than common solid lung nodules. Automatic recognition of the GGO CT imaging sign is therefore important for early diagnosis and potential cure of lung cancer. Existing GGO recognition methods rely on traditional low-level features, and their performance has improved slowly. Given the strong performance of CNN models in computer vision, we propose an automatic recognition method for the 3D GGO CT imaging sign that fuses hybrid resampling with layer-wise fine-tuned CNN models. The hybrid resampling operates over multiple views and multiple receptive fields, reducing the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy selects the optimal fine-tuned model, and fusing multiple CNN models yields better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method achieves 96.64% sensitivity, 71.43% specificity, and an F1 score of 0.83. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs when labeled images are scarce.
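The multi-model fusion idea above can be sketched as simple late fusion: average the class probabilities produced by several independently trained models and take the argmax. This is a minimal illustration, not the authors' exact fusion scheme; the probability values below are synthetic stand-ins for CNN softmax outputs.

```python
import numpy as np

# Softmax outputs of three hypothetical models for four samples
# (columns: [non-GGO, GGO]) -- illustrative numbers only.
preds = np.array([
    [[0.3, 0.7], [0.6, 0.4], [0.2, 0.8], [0.55, 0.45]],
    [[0.4, 0.6], [0.7, 0.3], [0.1, 0.9], [0.45, 0.55]],
    [[0.2, 0.8], [0.8, 0.2], [0.3, 0.7], [0.60, 0.40]],
])

fused = preds.mean(axis=0)      # late fusion: average probabilities across models
labels = fused.argmax(axis=1)   # final decision per sample
print(labels.tolist())          # → [1, 0, 1, 0]
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident model outvote two marginal ones, which is one reason fused ensembles often beat any single member.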
Background
The current study aims to determine the prognostic value of multiparametric MRI after combined Lenvatinib and TACE therapy in patients with advanced unresectable hepatocellular carcinoma (HCC).
Methods
A total of 61 HCC patients with pre-treatment multiparametric MRI at Sun Yat-sen University Cancer Center from January 2019 to March 2021 were recruited into the current study. All patients received combined Lenvatinib and TACE treatment. Potential clinical and imaging risk factors for disease progression were analyzed with a Cox regression model. For each patient, features were extracted from the following seven sequences: T1WI, T1WI arterial phase, T1WI portal phase, T1WI delay phase, T2WI, DWI (b = 800), and ADC. For each sequence, 1782 quantitative 3D radiomic features were extracted, and a random forest algorithm was used to select the top 20 features by importance. Seven logistic-regression-based prediction models, one per sequence, were built on the selected features, and fivefold cross-validation was used to evaluate the performance of each model.
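The per-sequence modeling pipeline described above (random-forest feature ranking, top-20 selection, logistic regression, fivefold cross-validation) can be sketched with scikit-learn. This is a hypothetical reconstruction on synthetic data with the stated dimensions (61 patients, 1782 features per sequence), not the study's actual code or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(61, 1782))   # 61 patients, 1782 radiomic features for one sequence
y = rng.integers(0, 2, size=61)   # binary progression label (synthetic)

# Rank features by random-forest importance and keep the top 20.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top20 = np.argsort(rf.feature_importances_)[::-1][:20]

# Logistic-regression model on the selected features, scored by fivefold CV AUC.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X[:, top20], y, cv=5, scoring="roc_auc")
print(len(scores), round(scores.mean(), 3))
```

In the study this loop would be repeated once per MRI sequence, yielding the seven per-sequence models; on random data as here, the mean AUC is expected to hover near chance.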
Results
Complete response (CR), partial response (PR), and stable disease (SD) were reported in 14 (23.0%), 35 (57.4%), and 7 (11.5%) patients, respectively. In multivariate analysis, tumor number (hazard ratio, HR = 4.64, 95% CI 1.03–20.88) and arterial phase intensity enhancement (HR = 0.24, 95% CI 0.09–0.64; P = 0.004) emerged as independent risk factors for disease progression. Adding the radiomics signature to the clinical factors enhanced the accuracy of the clinical model in predicting disease progression, with an AUC of 0.71, a sensitivity of 0.99, and a specificity of 0.95.
Conclusion
Radiomic signatures derived from pretreatment MRI could predict response to combined Lenvatinib and TACE therapy and increase the accuracy of a combined model for predicting disease progression. Clinicians may use them to select an optimal treatment strategy and to develop a personalized monitoring protocol, with the goal of improving clinical outcomes.