Histopathological whole-slide images of haematoxylin and eosin (H&E)-stained biopsies contain valuable information related to cancer and its clinical outcomes. Still, there are no highly accurate automated methods to correlate histopathological images with brain cancer patients' survival, which could help in scheduling patients' therapeutic treatment and allocating time for preclinical studies to guide personalized treatments. We propose a new classifier, DeepSurvNet, powered by deep convolutional neural networks, to accurately classify brain cancer patients' survival rate into four classes based on histopathological images (class I, 0-6 months; class II, 6-12 months; class III, 12-24 months; and class IV, >24 months survival after diagnosis). After training and testing the DeepSurvNet model on a public brain cancer dataset, The Cancer Genome Atlas, we generalized it using independent testing on unseen samples. Using DeepSurvNet, we obtained precisions of 0.99 and 0.8 in the testing phases on these datasets, respectively, showing that DeepSurvNet is a reliable classifier for survival-rate classification of brain cancer patients based on histopathological images. Finally, analysis of mutation frequencies revealed differences in the frequency and type of genes associated with each class, supporting the idea of a distinct genetic fingerprint associated with patient survival. We conclude that DeepSurvNet constitutes a new artificial intelligence tool to assess survival rate in brain cancer.
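As an illustration of the kind of classifier described above, the following is a minimal sketch of a four-class survival classifier operating on H&E image patches. The ResNet-18 backbone, patch size and optimiser settings are assumptions made for illustration only; they are not the published DeepSurvNet architecture.

```python
# Minimal sketch of a 4-class survival classifier on H&E image patches.
# NOTE: this is NOT the published DeepSurvNet architecture; it substitutes a
# standard torchvision ResNet-18 backbone purely to illustrate the setup
# (4 survival classes: 0-6, 6-12, 12-24 and >24 months after diagnosis).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # classes I-IV (survival intervals)

def build_classifier(num_classes: int = NUM_CLASSES) -> nn.Module:
    """Return a CNN whose final layer predicts one of the survival classes."""
    net = models.resnet18(weights=None)  # backbone choice is an assumption
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

if __name__ == "__main__":
    model = build_classifier()
    criterion = nn.CrossEntropyLoss()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Dummy batch standing in for 224x224 RGB patches cropped from
    # whole-slide images; real training would iterate over a DataLoader
    # built from TCGA patches with per-patient survival labels.
    patches = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_CLASSES, (8,))

    logits = model(patches)
    loss = criterion(logits, labels)
    loss.backward()
    optimiser.step()
    print(f"one-step loss: {loss.item():.4f}")
```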
Background
Glioblastoma is the most aggressive type of brain cancer, with high levels of intra- and inter-tumour heterogeneity that contribute to its rapid growth and invasion within the brain. However, a spatial characterisation of gene signatures and of the cell types expressing them in different tumour locations is still lacking.
Methods
We used a deep convolutional neural network (DCNN) as a semantic segmentation model to segment seven different tumour regions, including leading edge (LE), infiltrating tumour (IT), cellular tumour (CT), cellular tumour microvascular proliferation (CTmvp), cellular tumour pseudopalisading region around necrosis (CTpan), cellular tumour perinecrotic zones (CTpnz) and cellular tumour necrosis (CTne), in digitised glioblastoma histopathological slides from The Cancer Genome Atlas (TCGA). Correlation analysis between segmentation results from tumour images and matched RNA expression data was performed to identify genetic signatures specific to different tumour regions.
Results
We found that spatially resolved gene signatures were strongly correlated with survival in patients with defined genetic mutations. Further in silico cell ontology analysis, together with single-cell RNA sequencing data from resected glioblastoma tissue samples, showed that these tumour regions had different gene signatures, whose expression was driven by different cell types in the regional tumour microenvironment. Our results further pointed to a key role for interactions between microglia/pericytes/monocytes and tumour cells in the IT and CTmvp regions, which may contribute to poor patient survival.
Conclusions
This work identified key histopathological features that correlate with patient survival and detected spatially associated genetic signatures that contribute to tumour-stroma interactions and should be investigated as new targets in glioblastoma. The source code and datasets used are available on GitHub: https://github.com/amin20/GBM_WSSM.
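For illustration, the sketch below sets up a seven-class semantic segmentation model over the tumour regions listed in the Methods. The torchvision FCN-ResNet50 backbone and tile size are assumptions chosen for a self-contained example; they are not necessarily the architecture used in the GBM_WSSM repository.

```python
# Minimal sketch of a 7-class semantic segmentation setup for the tumour
# regions named above (LE, IT, CT, CTmvp, CTpan, CTpnz, CTne).
# NOTE: the backbone here (torchvision FCN-ResNet50) is an assumption for
# illustration only; it is not necessarily the GBM_WSSM architecture.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

REGIONS = ["LE", "IT", "CT", "CTmvp", "CTpan", "CTpnz", "CTne"]

def build_segmenter(num_classes: int = len(REGIONS)) -> nn.Module:
    """Return a per-pixel classifier over the seven tumour regions."""
    return fcn_resnet50(weights=None, num_classes=num_classes)

if __name__ == "__main__":
    model = build_segmenter()
    criterion = nn.CrossEntropyLoss()

    # Dummy tile standing in for a patch cropped from a digitised H&E slide;
    # the target assigns one region label to every pixel.
    tile = torch.randn(2, 3, 256, 256)
    target = torch.randint(0, len(REGIONS), (2, 256, 256))

    logits = model(tile)["out"]  # (N, 7, H, W) per-pixel class scores
    loss = criterion(logits, target)
    print(f"example loss: {loss.item():.4f}")
```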
In recent years, improved deep learning techniques have been applied to biomedical image processing for the classification and segmentation of different tumors based on magnetic resonance imaging (MRI) and histopathological (H&E) clinical information. Deep convolutional neural network (DCNN) architectures include tens to hundreds of processing layers that can extract multiple levels of features from image-based data, features that would otherwise be very difficult and time-consuming for experts to recognize and extract for the classification of tumors into different types, as well as for the segmentation of tumor images. This article summarizes the latest studies of deep learning techniques applied to three kinds of brain cancer medical images (histology, magnetic resonance, and computed tomography) and highlights current challenges for the broader applicability of DCNNs in personalized brain cancer care, focusing on two main applications of DCNNs: classification and segmentation of brain tumor images.
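To make the "multiple levels of features" point concrete, the short sketch below stacks a few convolution/pooling stages and prints how the feature maps evolve. The layer sizes are arbitrary and chosen only for illustration; they do not correspond to any architecture reviewed in the article.

```python
# Tiny sketch of hierarchical feature extraction in a DCNN: each
# convolution/pooling stage shrinks the spatial resolution while deepening
# the feature representation. Sizes here are arbitrary illustrations.
import torch
import torch.nn as nn

stages = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),   # low-level edges/textures
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),  # mid-level motifs
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),  # higher-level structures
])

x = torch.randn(1, 3, 224, 224)  # one RGB image tile
for i, stage in enumerate(stages, start=1):
    x = stage(x)
    print(f"after stage {i}: feature map shape {tuple(x.shape)}")
```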
Glioblastoma is the most aggressive type of brain cancer, with high levels of intra- and inter-tumour heterogeneity that contribute to its rapid growth and invasion within the brain. Here, we have used a deep convolutional neural network (DCNN) as a semantic segmentation model to segment seven different tumour regions, including leading edge (LE), infiltrating tumour (IT), cellular tumour (CT), cellular tumour microvascular proliferation (CTmvp), cellular tumour pseudopalisading region around necrosis (CTpan), cellular tumour perinecrotic zones (CTpnz) and cellular tumour necrosis (CTne), in digitised glioblastoma histopathological slides from The Cancer Genome Atlas (TCGA). Analysis of segmentation results from tumour images together with matched RNA expression data identified genetic signatures that are specific to these different tumour regions. We found that spatially resolved gene signatures were strongly correlated with survival in patients with defined genetic mutations. Moreover, in silico cell ontology analysis and single-cell RNA sequencing data from resected glioblastoma tissue samples showed that these tumour regions had different gene signatures, suggesting they are driven by different cell types in the tumour microenvironment. This points to a key role for interactions between microglia/pericytes/monocytes and tumour cells in the IT and CTmvp regions, which may contribute to poor patient survival. Overall, this work identifies key histopathological features that are indicative of patient survival and detects spatially associated genetic signatures that mediate tumour-stroma interactions, which should be investigated as new targets in glioblastoma. Citation Format: Amin Zadeh Shirazi, Mark D. McDonnell, Eric Fornaciari, Narjes Sadat Bagherian, Kaitlin G. Scheer, Michael S. Samuel, Mahdi Yaghoobi, Rebecca J. Ormsby, Santosh Poonnoose, Damon Tumes, Guillermo A. Gomez. A deep convolutional neural network for segmentation of whole-slide pathology images in glioblastoma [abstract]. In: Proceedings of the AACR Virtual Special Conference on Artificial Intelligence, Diagnosis, and Imaging; 2021 Jan 13-14. Philadelphia (PA): AACR; Clin Cancer Res 2021;27(5_Suppl):Abstract nr PO-004.
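As a rough illustration of the correlation analysis described above (segmentation-derived region composition versus matched RNA expression), the sketch below computes a Spearman correlation on synthetic data. The patient identifiers, region fraction and gene name are hypothetical placeholders, not values from the study.

```python
# Minimal sketch of correlating per-slide tumour-region composition with
# matched gene expression, in the spirit of the analysis described above.
# NOTE: the values, identifiers and column names are hypothetical; real
# inputs would come from the segmentation output and TCGA RNA-seq data.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patients = [f"PATIENT-{i:02d}" for i in range(30)]

# Fraction of each slide assigned to one region by the segmentation model
# (e.g. the CTmvp fraction), and expression of one candidate gene.
region_fraction = pd.Series(rng.uniform(0, 0.4, len(patients)),
                            index=patients, name="CTmvp_fraction")
expression = pd.Series(rng.normal(8.0, 1.5, len(patients)),
                       index=patients, name="gene_x_log2_expr")

rho, pval = spearmanr(region_fraction, expression)
print(f"Spearman rho = {rho:.3f}, p = {pval:.3g}")
```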