Glioblastoma (GBM) is a WHO grade IV brain tumor associated with poor overall survival (OS). Accurate OS prediction for GBM patients is highly desired by clinicians and oncologists for precise surgical and treatment planning. Radiomics aims to predict disease prognosis, and thereby inform personalized treatment, from a variety of imaging features extracted from multiple MR images. In this study, first-order intensity-based, volume- and shape-based, and textural radiomic features are extracted from fluid-attenuated inversion recovery (FLAIR) and contrast-enhanced T1-weighted (T1ce) MRI data. The region of interest is further decomposed with the stationary wavelet transform using low-pass and high-pass filtering, and radiomic features are extracted from these decomposed images to capture directional information. The proposed algorithm is evaluated on the Brain Tumor Segmentation (BraTS) challenge training, validation, and test datasets, achieving scores of 0.695, 0.571, and 0.558, respectively, and securing third position in the BraTS 2018 challenge for the OS prediction task.
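As a rough illustration of the wavelet-domain radiomics step described above, the sketch below decomposes a 2-D region-of-interest slice with a stationary wavelet transform and computes a handful of first-order statistics on the resulting low-pass and high-pass sub-bands. This is a minimal sketch under stated assumptions, not the study's actual pipeline: the Haar wavelet, the feature list, and the synthetic ROI patch are illustrative, and it relies on the `numpy` and `PyWavelets` (`pywt`) packages.

```python
import numpy as np
import pywt  # PyWavelets

def first_order_features(values):
    """A few first-order statistics over ROI intensities (illustrative subset)."""
    hist, _ = np.histogram(values, bins=32)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return {
        "mean": float(values.mean()),
        "std": float(values.std()),
        "energy": float(np.sum(values ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
    }

def swt_radiomics(roi_slice, wavelet="haar", level=1):
    """Stationary wavelet decomposition of a 2-D ROI slice, followed by
    first-order features on the approximation (low-pass) and the three
    directional detail (high-pass) sub-bands of each level."""
    # swt2 expects dimensions divisible by 2**level; pad the slice if needed.
    coeffs = pywt.swt2(roi_slice, wavelet=wavelet, level=level)
    features = {}
    for lvl, (cA, (cH, cV, cD)) in enumerate(coeffs, start=1):
        for band_name, band in zip(("LL", "LH", "HL", "HH"), (cA, cH, cV, cD)):
            for stat, value in first_order_features(band.ravel()).items():
                features[f"level{lvl}_{band_name}_{stat}"] = value
    return features

# Example with a synthetic 64x64 ROI patch standing in for a FLAIR slice.
roi = np.random.rand(64, 64)
print(len(swt_radiomics(roi)), "wavelet-domain features")
```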
Brain extraction, or skull-stripping, is an essential pre-processing step in neuroimaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL)-based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied to magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low-computational-footprint approach that is generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of
n = 3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel “modality-agnostic training” technique that can be applied using any available modality, without need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
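The “modality-agnostic training” idea lends itself to a short illustration: instead of fixing the input to one modality, each training step draws a random modality from the mpMRI stack, so the trained network accepts whichever modality is available at inference time. The PyTorch sketch below is an assumption of how such a loop can be written, not the paper's implementation; `train_modality_agnostic`, the dataset keys `"image"`/`"mask"`, and the single-channel network passed in as `model` are all illustrative placeholders.

```python
# Minimal sketch of one possible "modality-agnostic training" loop, assuming a
# single-channel 3-D segmentation network and a dataset that yields dicts with
# an "image" tensor of shape (4, D, H, W) holding co-registered mpMRI modalities
# and a binary brain "mask"; none of these names come from the paper.
import random
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

MODALITIES = ["t1", "t1ce", "t2", "flair"]  # assumed channel order of the mpMRI stack

def train_modality_agnostic(model, dataset, epochs=10, lr=1e-4, device="cuda"):
    model = model.to(device)
    loader = DataLoader(dataset, batch_size=1, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()  # brain vs. non-brain voxels

    for _ in range(epochs):
        for sample in loader:
            # Sample one modality per step so the network never relies on a
            # fixed input channel and can later be applied to any single modality.
            m = random.randrange(len(MODALITIES))
            x = sample["image"][:, m : m + 1].to(device)  # (B, 1, D, H, W)
            y = sample["mask"].to(device)                 # (B, 1, D, H, W)

            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model

# At inference time the same weights are applied to whichever modality is
# available, e.g. brain_logits = model(flair_volume), with no retraining.
```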
Detecting various types of cells in and around the tumor matrix holds special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up pathologists' time for higher-value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset contains over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw wide participation from across the world, and the top methods were able to match inter-human concordance on the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by the various participants. We have released the MoNuSAC2020 dataset to the public.