Objective: This study classifies sleep stages from a single-lead electrocardiogram (ECG) using beat detection, cardiorespiratory coupling in the time-frequency domain and a deep convolutional neural network (CNN). Approach: An ECG-derived respiration (EDR) signal and a synchronous beat-to-beat heart rate variability (HRV) time series were derived from the ECG using previously described robust algorithms. A measure of cardiorespiratory coupling (CRC) was extracted by calculating the coherence and cross-spectrogram of the EDR and HRV signals in 5 min windows. A CNN was then trained to classify the sleep stages (wake, rapid-eye-movement (REM) sleep, non-REM (NREM) light sleep and NREM deep sleep) from the corresponding CRC spectrograms. A support vector machine was then used to combine the output of the CNN with other features derived from the ECG, including phase-rectified signal averaging (PRSA), sample entropy, and standard spectral and temporal HRV measures. The MIT-BIH Polysomnographic Database (SLPDB), the PhysioNet/Computing in Cardiology Challenge 2018 database (CinC2018) and the Sleep Heart Health Study (SHHS) database, all expert-annotated for sleep stages, were used to train and validate the algorithm. Main results: Ten-fold cross-validation results showed that the proposed algorithm achieved an accuracy (Acc) of 75.4% and a Cohen's kappa coefficient of κ = 0.54 on the out-of-sample validation data in the classification of wake, REM, NREM light and deep sleep in SLPDB. This rose to Acc = 81.6% and κ = 0.63 for the classification of wake, REM sleep and NREM sleep, and Acc = 85.1% and κ = 0.68 for the classification of NREM sleep versus REM/wakefulness in SLPDB. Significance: The proposed ECG-based sleep stage classification approach represents the highest reported results on non-electroencephalographic data and uses datasets over ten times larger than those in previous studies.
Because it uses a state-of-the-art QRS detector and a deep learning model, the system requires no human annotation and can therefore be scaled for mass analysis.
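The cardiorespiratory coupling measure described above can be illustrated with a minimal sketch (not the authors' implementation): magnitude-squared coherence between a toy EDR surrogate and a toy HRV series over one 5 min window, computed with `scipy.signal.coherence`. The 4 Hz resampling rate, the 0.25 Hz respiratory line, and the noise levels are all assumptions for the demonstration; sliding this computation over successive windows would build up the CRC spectrogram fed to the CNN.

```python
import numpy as np
from scipy.signal import coherence

fs = 4.0                        # Hz; EDR/HRV series assumed resampled to 4 Hz
t = np.arange(0, 300, 1 / fs)   # one 5-minute analysis window

rng = np.random.default_rng(0)
resp = np.sin(2 * np.pi * 0.25 * t)             # shared respiratory oscillation (~15 breaths/min)
edr = resp + 0.5 * rng.standard_normal(t.size)  # toy ECG-derived respiration
hrv = (0.8 * resp                               # respiratory sinus arrhythmia component
       + 0.3 * np.sin(2 * np.pi * 0.1 * t)      # low-frequency HRV activity
       + 0.5 * rng.standard_normal(t.size))

# Magnitude-squared coherence between EDR and HRV across frequency;
# repeating this per window yields a time-frequency coupling map.
f, cxy = coherence(edr, hrv, fs=fs, nperseg=256)
peak_band = cxy[(f > 0.2) & (f < 0.3)].max()
print(f"peak coherence near 0.25 Hz: {peak_band:.2f}")
```

During sleep, strong coupling in the respiratory band is characteristic of stable NREM sleep, which is why the coherence image carries stage-discriminative information.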
Objective. To develop a sleep staging method from wrist-worn accelerometry and the photoplethysmogram (PPG) by leveraging transfer learning from a large electrocardiogram (ECG) database. Approach. In previous work, we developed a deep convolutional neural network for sleep staging from ECG using the cross-spectrogram of ECG-derived respiration and instantaneous beat intervals, heart rate variability metrics, spectral characteristics, and signal quality measures derived from 5793 subjects in the Sleep Heart Health Study (SHHS). We updated the weights of this model by transfer learning using PPG data derived from the Empatica E4 wristwatch worn by 105 subjects in the 'Emory Twin Study Follow-up' (ETSF) database, for whom overnight polysomnographic (PSG) scoring was available. The relative performance of PPG and actigraphy (Act), plus combinations of these two signals, with and without transfer learning, was assessed. Main results. The model with transfer learning showed higher accuracy (by 1–9 percentage points) and Cohen's kappa (by 0.01–0.13) than the model without transfer learning for every classification category. Statistically significant, though relatively small, incremental differences in accuracy occurred for every classification category, as tested with the McNemar test. The out-of-sample classification performance using features from PPG and actigraphy for four-class classification was accuracy (Acc) = 68.62% and kappa = 0.44. For two-class classification, the performance was Acc = 81.49% and kappa = 0.58. Significance. We proposed a combined PPG- and actigraphy-based sleep stage classification approach using transfer learning from a large ECG sleep database. The results demonstrate that the transfer learning approach improves estimates of sleep state.
The use of automated beat detectors and quality metrics means human over-reading is not required, and the approach can be scaled for large cross-sectional or longitudinal studies using wrist-worn devices for sleep staging.
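The transfer-learning step above can be sketched with a toy numpy model (the real network architecture and pretrained weights are not reproduced here; `W_feat`, `W_head`, and all sizes are illustrative assumptions). A feature extractor "pretrained" on the large ECG database is frozen, and only the classification head is updated on a small batch standing in for the PPG data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the ECG-pretrained network: a frozen feature
# extractor W_feat followed by a trainable linear head W_head.
W_feat = rng.standard_normal((16, 8)) * 0.1   # "pretrained" on the large ECG database
W_head = rng.standard_normal((8, 4)) * 0.1    # 4 sleep classes: wake/REM/light/deep

def forward(X):
    h = np.maximum(X @ W_feat, 0.0)           # frozen ReLU features
    logits = h @ W_head
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

# One fine-tuning step on a toy "PPG" batch: the gradient is applied only
# to the head, so the ECG-learned features are preserved.
X = rng.standard_normal((32, 16))             # toy PPG-derived inputs
y = rng.integers(0, 4, 32)                    # toy PSG stage labels

W_feat_before = W_feat.copy()
W_head_before = W_head.copy()

h, p = forward(X)
grad_logits = p.copy()
grad_logits[np.arange(32), y] -= 1.0          # softmax cross-entropy gradient
W_head -= 0.1 * (h.T @ grad_logits) / 32      # update head only; W_feat untouched

print("features frozen:", np.allclose(W_feat, W_feat_before))
```

In practice one might also unfreeze the last convolutional block once the head has converged; freezing everything else is what lets a 105-subject PPG dataset benefit from a 5793-subject ECG database.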
Objective: Ventricular contractions in healthy individuals normally follow the contractions of the atria to facilitate more efficient pump action and cardiac output. With a ventricular ectopic beat (VEB), the blood volume within the ventricles is pumped into the body's vessels before the ventricles receive blood from the atria, causing inefficient blood circulation. VEBs tend to cause perturbations in the instantaneous heart rate time series, making heart rate variability analysis inappropriate around such events, or requiring special treatment (such as signal averaging). Moreover, VEB frequency can be indicative of life-threatening problems. However, VEBs often mimic artifacts in both morphology and timing. Identification of VEBs is therefore an important unsolved problem. The aim of this study is to introduce a method combining the wavelet transform with a deep learning network for the classification of VEBs. Approach: We proposed a method to automatically discriminate VEB beats from other beats and artifacts using the wavelet transform of the electrocardiogram (ECG) and a convolutional neural network (CNN). Three types of wavelets (the Morlet wavelet, the Paul wavelet and the Gaussian derivative) were used to transform segments of single-channel (1D) ECG waveforms into two-dimensional (2D) time-frequency 'images'. The 2D time-frequency images were then passed into a CNN to optimize the convolutional filters and classification. Ten-fold cross validation was used to evaluate the approach on the MIT-BIH arrhythmia database (MIT-BIH). The American Heart Association (AHA) database was then used as an independent dataset to evaluate the trained network. Main results: Ten-fold cross-validation results on MIT-BIH showed that the proposed algorithm with the Paul wavelet achieved an overall F1 score of 84.94% and an accuracy of 97.96% on out-of-sample validation. Independent testing on AHA yielded an F1 score of 84.96% and an accuracy of 97.36%.
Significance: The trained network possessed exceptional transferability across databases and generalization to unseen data.
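The 1D-to-2D step above can be illustrated with a minimal numpy-only continuous wavelet transform using the Morlet wavelet (one of the three wavelets the study compares). This is a sketch, not the authors' implementation: the wavelet support, normalization, toy 10 Hz test signal, and frequency grid are all assumptions chosen for demonstration.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a Morlet wavelet (numpy-only sketch).

    Returns a 2D |CWT| 'image' of shape (len(freqs), len(x)) -- the kind of
    time-frequency representation passed to the CNN.
    """
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        scale = w0 / (2 * np.pi * f)              # scale giving centre frequency f
        m = int(min(10 * scale * fs, len(x)))     # wavelet support in samples
        t = (np.arange(m) - m / 2) / fs / scale   # time axis in wavelet units
        psi = np.pi ** -0.25 * np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2)
        # cross-correlation with the wavelet = convolution with its reversed conjugate
        out[i] = np.abs(np.convolve(x, psi.conj()[::-1], mode="same")) / np.sqrt(scale)
    return out

fs = 360                                  # MIT-BIH sampling rate
t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 10 * t)     # toy 10 Hz component standing in for QRS energy
freqs = np.linspace(2, 30, 32)
img = morlet_cwt(ecg_like, fs, freqs)
print(img.shape)                          # one (32, 720) image per ECG segment
```

Each ECG segment thus becomes a fixed-size 2D array, which is why an off-the-shelf image CNN can learn the beat-versus-artifact discrimination directly.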