Objectives: Visual assessment of the electroencephalogram (EEG) by experienced clinical neurophysiologists allows reliable outcome prediction in approximately half of all comatose patients after cardiac arrest. Deep neural networks hold promise to achieve similar or even better performance while being more objective and consistent.
Design: Prospective cohort study.
Setting: Medical ICUs of five teaching hospitals in the Netherlands.
Patients: Eight hundred ninety-five consecutive comatose patients after cardiac arrest.
Interventions: None.
Measurements and Main Results: Continuous EEG was recorded during the first 3 days after cardiac arrest. Functional outcome at 6 months was classified as good (Cerebral Performance Category 1-2) or poor (Cerebral Performance Category 3-5). We trained a convolutional neural network with a VGG architecture (introduced by the Oxford Visual Geometry Group) to predict neurologic outcome at 12 and 24 hours after cardiac arrest, using EEG epochs and outcome labels as inputs. The output of the network was the probability of good outcome. Data from two hospitals were used for training and internal validation (n = 661): 80% of these data were used for training and cross-validation, the remaining 20% for independent internal validation. Data from the other three hospitals were used for external validation (n = 234). Prediction of poor outcome was most accurate at 12 hours, with a sensitivity in the external validation set of 58% (95% CI, 51-65%) at a false positive rate of 0% (CI, 0-7%). Good outcome could be predicted at 12 hours with a sensitivity of 48% (CI, 45-51%) at a false positive rate of 5% (CI, 0-15%) in the external validation set.
Conclusions: Deep learning of EEG signals outperforms any previously reported outcome predictor of coma after cardiac arrest, including visual EEG assessment by trained EEG experts. Our approach offers the potential for objective, real-time, bedside insight into the neurologic prognosis of comatose patients after cardiac arrest.
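The headline metric here, sensitivity at a fixed false positive rate, is computed by choosing the most liberal probability threshold at which the false positive rate still does not exceed the target. A minimal NumPy sketch of that calculation (illustrative only; `sensitivity_at_fpr` and the toy labels below are not the study's code):

```python
import numpy as np

def sensitivity_at_fpr(y_true, p_good, max_fpr=0.0):
    """Sensitivity for predicting poor outcome at the threshold that
    keeps the false positive rate (good-outcome patients wrongly
    labeled poor) at or below max_fpr.

    y_true: 1 = poor outcome, 0 = good outcome.
    p_good: network output, predicted probability of GOOD outcome,
            so lower values are stronger evidence for poor outcome.
    """
    n_poor = max(int(np.sum(y_true == 1)), 1)
    n_good = max(int(np.sum(y_true == 0)), 1)
    best_sens = 0.0
    for t in np.unique(p_good):
        pred_poor = p_good <= t            # call "poor" below threshold t
        fpr = np.sum(pred_poor & (y_true == 0)) / n_good
        sens = np.sum(pred_poor & (y_true == 1)) / n_poor
        if fpr <= max_fpr:                 # threshold is admissible
            best_sens = max(best_sens, sens)
    return best_sens

# Toy example: four poor-outcome and two good-outcome patients.
y = np.array([1, 1, 1, 1, 0, 0])
p = np.array([0.1, 0.2, 0.3, 0.9, 0.8, 0.95])
print(sensitivity_at_fpr(y, p, max_fpr=0.0))  # 0.75: 3 of 4 poor detected
```

Requiring a 0% false positive rate, as in the abstract, means no good-outcome patient may be misclassified as poor, which is why the reported sensitivities are well below 100%.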
Interictal epileptiform discharge (IED) detection in EEG signals is widely used in the diagnosis of epilepsy. Visual analysis of EEGs by experts remains the gold standard, outperforming current computer algorithms. Deep learning methods offer an automated way to perform this task. We trained a VGG network using 2-s EEG epochs from patients with focal and generalized epilepsy (39 and 40 patients, respectively; 1,977 epochs total) and 53 normal controls (110,770 epochs). Five-fold cross-validation was performed on the training set. Model performance was assessed on an independent set (734 IEDs from 20 patients with focal and generalized epilepsy and 23,040 normal epochs from 14 controls). Network visualization techniques (filter visualization and occlusion) were applied. The VGG yielded an area under the ROC curve (AUC) of 0.96 (95% confidence interval (CI), 0.95-0.97). At 99% specificity, the sensitivity was 79%, and only one sample was misclassified per two minutes of analyzed EEG. Filter visualization showed that filters from higher-level layers display patches of activity indicative of IED detection. Occlusion showed that the model correctly identified IED shapes. We show that deep neural networks can reliably identify IEDs, which may lead to a fundamental shift in clinical EEG analysis.
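The occlusion technique mentioned above works by masking successive segments of the input and recording how much the model's score drops; segments whose removal hurts the score most are the ones the network relies on (e.g. the IED shape). A minimal NumPy sketch under assumed names (`occlusion_map` and the toy peak-detector `score_fn` are illustrative, not the paper's model):

```python
import numpy as np

def occlusion_map(epoch, score_fn, window=20):
    """Occlusion sensitivity for a 1-D EEG epoch.

    Zero out a sliding window of samples and record, per position,
    how much the model score drops relative to the unmasked epoch.
    Large drops mark segments the model depends on.
    """
    base = score_fn(epoch)                       # score on intact epoch
    n = len(epoch) - window + 1
    drops = np.zeros(n)
    for i in range(n):
        occluded = epoch.copy()
        occluded[i:i + window] = 0.0             # mask this segment
        drops[i] = base - score_fn(occluded)
    return drops

# Toy example: a "model" that scores the largest absolute deflection,
# applied to an epoch with a single spike at sample 100.
epoch = np.zeros(256)
epoch[100] = 5.0
drops = occlusion_map(epoch, lambda x: np.max(np.abs(x)), window=20)
print(int(np.argmax(drops)))  # first window that covers the spike
```

Only windows overlapping the spike (positions 81 through 100 for a 20-sample window) erase it and produce a score drop, so the occlusion map localizes the feature, which is the same logic the paper uses to verify the VGG attends to IED shapes.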