In esophageal cancer, few prediction tools can be confidently used in current clinical practice. We developed a deep convolutional neural network (CNN) with 798 positron emission tomography (PET) scans of esophageal squamous cell carcinoma and 309 PET scans of stage I lung cancer. In the first stage, we pretrained a 3D-CNN on all PET scans for the task of classifying each scan as esophageal cancer or lung cancer. Overall, 548 of the 798 PET scans of esophageal cancer patients were included in the second stage, with the aim of classifying patients who expired within one year of diagnosis versus those who survived longer. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. With pretraining, the deep CNN attained an AUC of 0.738 in identifying patients who expired within one year after diagnosis. In the survival analysis, patients who were predicted to expire but were alive at one year after diagnosis had a 5-year survival rate of 32.6%, which was significantly worse than that of patients who were predicted to survive and were alive at one year after diagnosis (50.5%, p < 0.001). These results suggest that the prediction model could identify tumors with more aggressive behavior. In the multivariable analysis, the prediction result remained an independent prognostic factor (hazard ratio: 2.830; 95% confidence interval: 2.252–3.555, p < 0.001). We conclude that a 3D-CNN can be trained on PET image datasets to predict esophageal cancer outcome with acceptable accuracy.
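The two-stage strategy described above (pretraining on tumor-type classification, then repurposing the same network to predict one-year survival) can be sketched in a few lines of PyTorch. The network depth, layer widths, 64×64×64 input volume, and learning rates below are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of the two-stage strategy: pretrain a small 3D CNN on a
# source task (esophageal vs. lung cancer PET scans), then reuse the
# convolutional trunk and fine-tune a new head for 1-year survival prediction.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)   # (batch, 64)
        return self.classifier(feats)         # class logits

# Stage 1: pretrain on the tumor-type task (esophageal vs. lung cancer).
model = Small3DCNN(num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

dummy_pet = torch.randn(4, 1, 64, 64, 64)     # stand-in for a PET mini-batch
dummy_type = torch.randint(0, 2, (4,))        # 0 = esophageal, 1 = lung
criterion(model(dummy_pet), dummy_type).backward()
optimizer.step()

# Stage 2: keep the pretrained trunk, swap in a fresh head, and fine-tune on the
# binary 1-year survival label (expired within vs. survived beyond one year).
model.classifier = nn.Linear(64, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
dummy_survival = torch.randint(0, 2, (4,))    # 1 = expired within one year
criterion(model(dummy_pet), dummy_survival).backward()
optimizer.step()
```

The point of the sketch is the transfer step: the convolutional trunk keeps the weights learned on the tumor-type task, while only the final layer is re-initialized for the survival label.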
Understanding the factors that impact prognosis for cancer patients has high clinical relevance for treatment decisions and monitoring of disease outcome. Advances in artificial intelligence (AI) and digital pathology offer an exciting opportunity to capitalize on whole slide images (WSIs) of hematoxylin and eosin (H&E) stained tumor tissue for objective prognosis and prediction of response to targeted therapies. AI models often require hand-delineated annotations for effective training, which may not be readily available for larger data sets. In this study, we investigated whether AI models can be trained without region-level annotations, using patient-level survival data alone. We present a weakly supervised survival convolutional neural network (WSS-CNN) equipped with a visual attention mechanism for predicting overall survival. The inclusion of visual attention provides insights into pathologically interpretable regions of the tumor microenvironment, which may improve our understanding of the disease pathomechanism. We performed this analysis on two independent, multi-center patient data sets of lung carcinoma (a publicly available data set) and bladder urothelial carcinoma. We performed univariable and multivariable analyses and show that WSS-CNN features are prognostic of overall survival in both tumor indications. The presented results highlight the significance of computational pathology algorithms for predicting prognosis using H&E stained images alone and underpin the use of computational methods to improve the efficiency of clinical trial studies.
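As a rough illustration of the weak-supervision idea (patient-level survival labels only, with attention pooling over tile features), the sketch below aggregates a bag of WSI tile features into a single risk score and trains it with a Cox partial-likelihood loss. The tile feature dimension, attention size, and the choice of a Cox loss are assumptions made for illustration; the published WSS-CNN may differ in each of these details.

```python
# Minimal sketch: attention-based pooling over tile features with a Cox
# partial-likelihood loss, supervised only by patient-level survival data.
import torch
import torch.nn as nn

class AttentionSurvivalHead(nn.Module):
    """Aggregates a bag of tile features into one risk score per patient."""
    def __init__(self, feat_dim: int = 512, attn_dim: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )
        self.risk = nn.Linear(feat_dim, 1)

    def forward(self, tiles: torch.Tensor):
        # tiles: (num_tiles, feat_dim) for one patient's slide
        weights = torch.softmax(self.attention(tiles), dim=0)  # per-tile attention
        slide_feat = (weights * tiles).sum(dim=0)              # weighted pooling
        return self.risk(slide_feat), weights                  # risk score + attention map

def cox_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood for a batch of patients."""
    order = torch.argsort(time, descending=True)   # build risk sets by follow-up time
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# Toy usage: three patients with different numbers of tiles per slide.
head = AttentionSurvivalHead()
bags = [torch.randn(n, 512) for n in (50, 80, 30)]   # pre-extracted tile features
risks = torch.stack([head(b)[0].squeeze() for b in bags])
time = torch.tensor([14.0, 32.0, 7.5])   # months of follow-up
event = torch.tensor([1.0, 0.0, 1.0])    # 1 = death observed, 0 = censored
loss = cox_partial_likelihood(risks, time, event)
loss.backward()
```

The per-tile attention weights returned by the head are the kind of region-level scores that can be overlaid on the slide to indicate which areas of the tumor microenvironment drive the predicted risk.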
Background Both lymphovascular invasion, characterized by penetration of tumor cells into the peritumoural vascular or lymphatic network, and perineural invasion, characterized by involvement of tumor cells surrounding nerve fibers, are considered important steps in tumor spread and are known poor prognostic factors in esophageal cancer. However, information on these histological features is unavailable until pathological examination of surgically resected specimens. We aimed to predict the presence or absence of these factors from positron emission tomography images obtained during the staging workup. Methods Pre-treatment positron emission tomography images and pathological reports of 278 patients who underwent esophagectomy for squamous cell carcinoma were collected. A convolutional neural network was constructed in a stepwise manner to distinguish patients with either lymphovascular invasion or perineural invasion from those without. Results A randomly selected 248 patients were included in the training set, and a stepwise approach was used to train our custom neural network. The performance of the fine-tuned neural network was tested in an independent set of the remaining 30 patients. The accuracy in predicting the presence or absence of either lymphovascular invasion or perineural invasion was 66.7% (20 of 30 predictions were correct). Conclusion Using pre-treatment positron emission tomography images alone to predict the presence or absence of poor prognostic histological factors, i.e. lymphovascular invasion or perineural invasion, with a deep convolutional neural network is feasible. The technique of deep learning may identify patients with poor prognosis and enable personalized medicine in esophageal cancer. Disclosure All authors have declared no conflicts of interest.
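The stepwise training mentioned in the Methods can be pictured as a freeze-then-unfreeze fine-tuning schedule, sketched below in PyTorch. The placeholder backbone, 2D input shape, and learning rates are hypothetical; the abstract does not specify the authors' actual architecture or training steps.

```python
# Hypothetical sketch: first train only a new binary head (LVI/PNI present vs.
# absent) on top of a frozen, previously trained backbone, then unfreeze the
# backbone and fine-tune everything at a lower learning rate.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # placeholder feature extractor
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten()
)
head = nn.Linear(32, 2)                        # 0 = absent, 1 = LVI or PNI present
model = nn.Sequential(backbone, head)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 128, 128)                # stand-in PET image batch
y = torch.randint(0, 2, (8,))                  # stand-in pathology labels

# Step 1: freeze the backbone, train the head only.
for p in backbone.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
opt.zero_grad()
criterion(model(x), y).backward()
opt.step()

# Step 2: unfreeze and fine-tune end-to-end with a smaller learning rate.
for p in backbone.parameters():
    p.requires_grad = True
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
opt.zero_grad()
criterion(model(x), y).backward()
opt.step()

# Evaluation on a held-out set would then count correct present/absent calls,
# analogous to the 20/30 (66.7%) accuracy reported above.
with torch.no_grad():
    accuracy = (model(x).argmax(dim=1) == y).float().mean()
```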