Glaucoma is one of the leading causes of irreversible but preventable blindness in working-age populations. Color fundus photography (CFP) is the most cost-effective imaging modality for screening retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers, such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied in medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmarking strategy makes it difficult to compare existing methods in a uniform way. To overcome these issues, we set up the Retinal Fundus Glaucoma Challenge, REFUGE (https://refuge.grand-challenge.org), held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification.
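For context on the biomarker mentioned above, here is a minimal sketch (not part of the challenge material) of how the vertical cup-to-disc ratio is commonly derived from binary optic disc and optic cup segmentation masks; the function and variable names are illustrative assumptions.

```python
# Illustrative sketch: vertical cup-to-disc ratio (vCDR) from 2D binary masks.
# Mask conventions (True = structure present) are assumptions, not challenge code.
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Height in pixels of the True region of a 2D binary mask."""
    rows = np.any(mask, axis=1)
    if not rows.any():
        return 0
    top, bottom = np.where(rows)[0][[0, -1]]
    return bottom - top + 1

def vertical_cup_to_disc_ratio(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """vCDR = vertical cup diameter / vertical disc diameter."""
    disc_d = vertical_diameter(disc_mask)
    cup_d = vertical_diameter(cup_mask)
    return cup_d / disc_d if disc_d > 0 else float("nan")
```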
Recent studies suggest that the combined analysis of magnetic resonance imaging (MRI), which measures brain atrophy, and positron emission tomography (PET), which quantifies hypo-metabolism, provides improved accuracy in diagnosing Alzheimer's disease. However, such techniques are limited by the availability of corresponding scans in each modality. The current work focuses on a cross-modal approach to estimate FDG-PET scans from given MR scans using a 3D U-Net architecture. Using the complete MR image instead of a local patch-based approach helps capture non-local and non-linear correlations between the MRI and PET modalities. The quality of the estimated PET scans is measured using quantitative metrics such as MAE, PSNR, and SSIM. The efficacy of the proposed method is evaluated in the context of Alzheimer's disease classification. The accuracy using only MRI is 70.18%, while joint classification using synthesized PET and MRI is 74.43%, with a p-value of 0.06. The significant improvement in diagnosis demonstrates the utility of the synthesized PET scans for multi-modal analysis.
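As an illustration of the image-quality metrics reported above, here is a minimal sketch (not the authors' code) of how MAE, PSNR, and SSIM could be computed between a real and a synthesized 3D PET volume; the variable names and the assumed [0, 1] intensity range are illustrative.

```python
# Hedged sketch: volume-level MAE, PSNR, and SSIM between two 3D arrays.
# Assumes both volumes are intensity-normalized to [0, 1] and at least 7 voxels per axis.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def volume_metrics(real_pet: np.ndarray, synth_pet: np.ndarray) -> dict:
    """Compare a ground-truth and a synthesized 3D volume."""
    mae = np.mean(np.abs(real_pet - synth_pet))
    psnr = peak_signal_noise_ratio(real_pet, synth_pet, data_range=1.0)
    ssim = structural_similarity(real_pet, synth_pet, data_range=1.0)
    return {"MAE": float(mae), "PSNR": float(psnr), "SSIM": float(ssim)}
```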
Electroencephalogram (EEG) microstates, which represent quasi-stable, global neuronal activity, are considered the building blocks of brain dynamics. Therefore, the analysis of microstate sequences is a promising approach to understanding the fast brain dynamics that underlie various mental processes. Recent studies suggest that EEG microstate sequences are non-Markovian and nonstationary, highlighting the importance of the sequential flow of information between different brain states. These findings inspired us to model these sequences using recurrent neural networks (RNNs) consisting of long short-term memory (LSTM) units to capture the complex temporal dependencies. Using an LSTM-based autoencoder framework and different encoding schemes, we modeled the microstate sequences at multiple time scales (200-2,000 ms), aiming to capture stably recurring microstate patterns within and across subjects. We show that RNNs can learn the underlying microstate patterns with high accuracy and that the microstate trajectories are subject-invariant at shorter time scales (≤400 ms) and reproducible across sessions. A significant drop in reconstruction accuracy was observed for longer sequence lengths of 2,000 ms. These findings indirectly corroborate earlier studies indicating that EEG microstate sequences exhibit long-range dependencies with finite memory content. Furthermore, we find that the latent representations learned by the RNNs are sensitive to external stimulation such as stress, whereas conventional univariate microstate measures (e.g., occurrence, mean duration) fail to capture such changes in brain dynamics. While RNNs cannot be configured to identify the specific discriminating patterns, they have the potential to learn the underlying temporal dynamics and are sensitive to sequence aberrations characterized by changes in mental processes. Empowered with a macroscopic understanding of temporal dynamics that extends beyond short-term interactions, RNNs offer a reliable alternative for exploring system-level brain dynamics using EEG microstate sequences.
KEYWORDS: EEG, microstates, recurrent neural networks, stress
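As an illustration of the LSTM-based autoencoder framework described in the abstract above, here is a minimal sketch assuming one-hot encoded sequences over four canonical microstate classes; the architecture, hyperparameters, and sampling rate are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of an LSTM autoencoder over discrete microstate sequences.
# Hidden size, sequence length, and sampling rate are assumptions for illustration.
import torch
import torch.nn as nn

class MicrostateLSTMAutoencoder(nn.Module):
    def __init__(self, n_states: int = 4, hidden_size: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(n_states, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, n_states)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_states) one-hot microstate labels
        _, (h, _) = self.encoder(x)                # compress sequence into a latent state
        latent = h[-1].unsqueeze(1)                # (batch, 1, hidden_size)
        repeated = latent.repeat(1, x.size(1), 1)  # feed the latent at every time step
        out, _ = self.decoder(repeated)
        return self.readout(out)                   # per-step logits for reconstruction

# Usage sketch: reconstruct a batch of 500 ms sequences at an assumed 250 Hz (125 samples).
model = MicrostateLSTMAutoencoder()
seq = torch.nn.functional.one_hot(
    torch.randint(0, 4, (8, 125)), num_classes=4).float()
logits = model(seq)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 4), seq.argmax(-1).reshape(-1))
```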