We present a method for classifying target sleep arousal regions in polysomnography recordings. Time- and frequency-domain features of clinical and statistical origin were derived from the polysomnography signals and fed into a bidirectional recurrent neural network with Long Short-Term Memory units (BRNN-LSTM). The predictions of five recurrent neural networks, trained on different features and training sets, were averaged for each sample to yield a more robust classifier. The proposed method was developed and validated on the PhysioNet Challenge dataset, which consisted of a training set of 994 subjects and a hidden test set of 989 subjects. Five-fold cross-validation on the training set resulted in an area under the precision-recall curve (AUPRC) of 0.452, an area under the receiver operating characteristic curve (AUROC) of 0.901, and an intraclass correlation ICC(2,1) of 0.59. The classifier was further validated on the PhysioNet Challenge test set, resulting in an AUPRC of 0.45.
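The per-sample averaging of the five networks' predictions can be sketched as follows. This is a minimal NumPy illustration; the function name, array shapes, and toy values are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def ensemble_average(prediction_list):
    """Average per-sample arousal probabilities from several models.

    prediction_list: list of 1-D arrays, one per model, each holding the
    predicted probability of arousal for every sample of the recording.
    Returns the element-wise mean, used as the combined prediction.
    """
    stacked = np.stack(prediction_list, axis=0)  # shape: (n_models, n_samples)
    return stacked.mean(axis=0)

# Toy usage: five models scoring the same four samples.
preds = [np.array([0.9, 0.1, 0.8, 0.2]),
         np.array([0.7, 0.3, 0.6, 0.4]),
         np.array([0.8, 0.2, 0.9, 0.1]),
         np.array([0.6, 0.2, 0.7, 0.3]),
         np.array([1.0, 0.2, 0.5, 0.5])]
combined = ensemble_average(preds)  # combined[0] == 0.8
```

Averaging reduces the variance of any single network's errors, which is why the combined classifier is described as more robust.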
Introduction
Sleep stage classification is of central importance when diagnosing various sleep-related diseases. Performing a full polysomnography (PSG) recording can be time-consuming and expensive, and often requires an overnight stay at a sleep clinic. Furthermore, the manual sleep staging process is tedious and subject to inter-scorer variability.
Here we present an end-to-end deep learning approach to robustly classify sleep stages from Self Applied Somnography (SAS) studies with frontal EEG and EOG signals. This setup allows patients to self-administer EEG and EOG leads in a home sleep study, which reduces cost and is more convenient for the patients. However, self-administration of the leads increases the risk of loose electrodes, which the algorithm must be robust to. The model structure was inspired by ResNet (He, Zhang, Ren, & Sun, 2015), which has been highly successful in image recognition tasks. The ResTNet comprises the characteristic residual blocks with an added temporal component.
Methods
The ResTNet classifies sleep stages directly from the raw signals using convolutional neural network (CNN) layers, residual blocks, and a gated recurrent unit (GRU), thereby avoiding manual feature extraction. This significantly reduces sleep stage prediction time and allows the model to learn more complex relations as the size of the training data increases.
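As a rough illustration of the building blocks named above, a residual block adds its input back onto the output of its convolutional path, and a GRU carries a hidden state across time. This is a toy single-channel NumPy sketch; the kernel, gate weights, and function names are invented for illustration and are not the ResTNet's actual layers:

```python
import numpy as np

def conv1d(x, kernel):
    # 'same'-padded 1-D convolution over a single-channel signal.
    return np.convolve(x, kernel, mode="same")

def residual_block(x, kernel):
    # Convolutional path with ReLU, plus the identity skip connection:
    # out = f(x) + x, the defining property of a residual block.
    return np.maximum(conv1d(x, kernel), 0.0) + x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(h, x, Wz, Wr, Wh):
    # One gated-recurrent-unit update for scalar input x and hidden state h.
    z = sigmoid(Wz[0] * x + Wz[1] * h)               # update gate
    r = sigmoid(Wr[0] * x + Wr[1] * h)               # reset gate
    h_tilde = np.tanh(Wh[0] * x + Wh[1] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

# Toy forward pass: residual features, then a GRU scan over time.
signal = np.array([0.1, -0.2, 0.4, 0.0, 0.3])
features = residual_block(signal, kernel=np.array([0.25, 0.5, 0.25]))
h = 0.0
for x_t in features:
    h = gru_step(h, x_t, Wz=(0.5, 0.5), Wr=(0.5, 0.5), Wh=(1.0, 1.0))
```

The skip connection lets gradients flow past each convolutional path, which is what makes deep residual stacks trainable; the GRU then aggregates the per-timestep features into a temporal summary.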
The model was developed and validated on over 400 manually scored sleep studies using the novel SAS setup. In developing the model, we used data augmentation techniques to simulate loose electrodes and distorted signals, increasing model robustness to missing signals and low-quality data.
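Loose-electrode augmentation of the kind described can be sketched as follows. This is a hypothetical illustration; the dropout probability, segment length, and noise level are invented parameters, not the study's actual values:

```python
import numpy as np

def simulate_loose_electrode(signal, rng, drop_prob=0.3, max_len=100, noise_std=0.05):
    """Randomly flatten a segment of the signal and add noise.

    With probability drop_prob, a randomly placed segment is zeroed out,
    mimicking an electrode that has lost skin contact; Gaussian noise is
    then added everywhere to mimic a distorted, low-quality recording.
    """
    out = signal.copy()
    if rng.random() < drop_prob:
        start = int(rng.integers(0, len(out)))
        length = int(rng.integers(1, max_len + 1))
        out[start:start + length] = 0.0                  # dead segment
    out += rng.normal(0.0, noise_std, size=out.shape)    # measurement noise
    return out

rng = np.random.default_rng(0)
eeg = np.sin(np.linspace(0, 10 * np.pi, 1000))
augmented = simulate_loose_electrode(eeg, rng)
```

Training on such corrupted copies forces the network to rely on whichever channels remain informative, which is the stated goal of robustness to self-administered leads.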
Results
The study shows that applying the robust ResTNet model to SAS studies gives accuracy > 0.80 and F1-score > 0.80. It outperforms our previous model, which used hand-crafted features, and achieves performance comparable to that of a human scorer.
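The reported accuracy and F1 metrics are computed from the agreement between predicted and manually scored stage labels. A minimal pure-Python illustration, assuming per-epoch stage labels and macro-averaged F1 (the averaging convention is an assumption; the abstract does not specify it):

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    # Per-class F1 from precision and recall, averaged over classes.
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy example: sleep stage labels for six 30-second epochs.
true = ["W", "N2", "N2", "REM", "W", "N2"]
pred = ["W", "N2", "REM", "REM", "W", "N2"]
acc = accuracy(true, pred)   # 5 of 6 epochs correct
f1 = macro_f1(true, pred)
```

Macro averaging weights each stage equally, which matters in sleep staging because stage prevalence is highly imbalanced across the night.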
Conclusion
The ResTNet is fast, gives accurate predictions, and is robust to loose electrodes. The end-to-end model furthermore promises better performance with more data. Combined with the simplicity of the SAS setup, it is an attractive option for large-scale sleep studies.
Support
This work was supported by the Icelandic Centre for Research RANNÍS (175256-0611).