Glioblastoma is a World Health Organization (WHO) grade IV glioma with an aggressive growth pattern. Current clinical practice in the MRI-based diagnosis and prognosis of glioblastoma involves multiple steps, including manual tumor sizing. Accurate identification and segmentation of the multiple abnormal tissue types within the tumor volume in MRI are essential for precise survival prediction. Manual detection and sizing of the tumor and its abnormal tissues are tedious and subject to inter-observer variability. Consequently, this work proposes a fully automated MRI-based framework for glioblastoma and abnormal tissue segmentation and survival prediction. The framework includes radiomics feature-guided deep neural network methods for tumor tissue segmentation, followed by survival regression and classification using these abnormal tumor tissue segments together with other relevant clinical features. The proposed multi-tissue segmentation step effectively fuses feature-based and feature-guided deep radiomics information in structural MRI. The survival prediction step includes two representative pipelines that combine different feature selection and regression approaches. The framework is evaluated on two recent, widely used benchmark datasets from the Brain Tumor Segmentation (BraTS) global challenges of 2017 and 2018. The best overall survival pipeline in the proposed framework achieves leave-one-out cross-validation (LOOCV) accuracies of 0.73 on the training datasets and 0.68 on the validation datasets, which are among the highest reported in the literature for tumor patient survival prediction. Finally, a critical analysis of the radiomics features and their efficacy in segmentation and survival prediction is presented as lessons learned.
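For illustration, the sketch below shows how a survival-prediction pipeline of the kind described above (radiomics and clinical features, feature selection, a classifier, and leave-one-out cross-validation) can be assembled. It is a minimal sketch assuming scikit-learn; the synthetic data, the three survival classes, and the specific selector and classifier are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: features -> feature selection -> classifier, scored with LOOCV.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 120))      # 60 patients x 120 radiomics/clinical features (synthetic)
y = rng.integers(0, 3, size=60)     # survival class: 0=short, 1=mid, 2=long (synthetic labels)

pipeline = Pipeline([
    ("scale", StandardScaler()),                          # normalize feature ranges
    ("select", SelectKBest(f_classif, k=20)),             # keep the 20 most discriminative features
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Leave-one-out CV: each patient is held out once and predicted by a model
# trained on all remaining patients.
scores = cross_val_score(pipeline, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.2f}")
```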
Facial expression recognition is a challenging task that involves detecting and interpreting complex and subtle changes in facial muscles. Recent advances in feed-forward deep neural networks (DNNs) have improved object recognition performance, and sparse feature learning in feed-forward DNN models offers further gains over earlier handcrafted techniques. However, the depth and computational complexity of feed-forward DNNs grow with the difficulty of the facial expression recognition problem. Moreover, feed-forward DNN architectures do not exploit another important learning paradigm, recurrence, which is ubiquitous in the human visual system. Consequently, this paper proposes a novel, biologically relevant sparse-deep simultaneous recurrent network (S-DSRN) for robust facial expression recognition. Feature sparsity is obtained by adopting dropout learning in the proposed DSRN rather than by handcrafting additional penalty terms for sparse data representation. Theoretical analysis of S-DSRN shows that dropout learning yields the desired sparsity while preventing the model from overfitting. Experimental results on two of the most widely used publicly available facial expression datasets suggest that the proposed method achieves higher accuracy, requires fewer parameters, and incurs lower computational complexity than previously reported state-of-the-art feed-forward DNNs. Furthermore, we show that combining the proposed architecture with a state-of-the-art metric learning technique significantly improves overall recognition performance. Finally, a graphics processing unit (GPU)-based implementation of S-DSRN is presented for real-time applications.
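To make the recurrence-plus-dropout idea concrete, the sketch below shows a dropout-regularized simultaneous recurrent block in PyTorch: the same layer is applied over several recurrent iterations to refine a hidden state, and dropout on the hidden activations encourages sparse representations without any handcrafted sparsity penalty. The layer sizes, number of iterations, input resolution, and class count are illustrative assumptions and do not reproduce the paper's exact S-DSRN architecture.

```python
# Illustrative sketch of a dropout-based simultaneous recurrent block (not the authors' exact model).
import torch
import torch.nn as nn

class SimpleDSRN(nn.Module):
    def __init__(self, in_dim=2304, hidden_dim=512, num_classes=7, iterations=5, p_drop=0.5):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hidden_dim)     # feed-forward input projection
        self.recurrent = nn.Linear(hidden_dim, hidden_dim)  # shared weights reused each iteration
        self.dropout = nn.Dropout(p_drop)                   # dropout induces sparse hidden activity
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.iterations = iterations

    def forward(self, x):
        u = self.input_proj(x)               # external input, fixed across iterations
        h = torch.zeros_like(u)              # hidden state refined recurrently
        for _ in range(self.iterations):
            h = torch.relu(u + self.recurrent(h))
            h = self.dropout(h)              # sparsification via dropout, no extra penalty term
        return self.classifier(h)

# Example forward pass on a batch of 48x48 grayscale faces flattened to vectors.
model = SimpleDSRN()
faces = torch.randn(8, 2304)                 # 8 synthetic images, 48*48 = 2304 pixels
logits = model(faces)                        # -> shape (8, 7) expression scores
print(logits.shape)
```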