Existing HRV toolboxes do not include standardized preprocessing, signal quality indices (for removing noisy segments), or abnormal rhythm detection, and are therefore likely to produce significant errors in the presence of moderate to high noise or arrhythmias. We therefore describe the inclusion of validated tools that address these issues, and we make recommendations for default parameter values, testing, and reporting.
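As an illustration of the kind of signal quality index such a toolbox might apply before computing HRV metrics, the sketch below scores a segment by the agreement between two independent beat detectors (a bSQI-style measure). The function name, tolerance, and threshold are assumptions for illustration, not the toolbox's actual implementation.

```python
# Hypothetical beat-agreement signal quality index (bSQI-style sketch).
# Assumes two independent beat detectors have already been run on the segment.
import numpy as np

def beat_agreement_sqi(beats_a, beats_b, tolerance_s=0.15):
    """Fraction of beats from detector A matched by detector B within a time tolerance."""
    beats_a = np.asarray(beats_a, dtype=float)
    beats_b = np.asarray(beats_b, dtype=float)
    if beats_a.size == 0 or beats_b.size == 0:
        return 0.0
    # Distance from each detector-A beat to the nearest detector-B beat (seconds).
    nearest = np.min(np.abs(beats_a[:, None] - beats_b[None, :]), axis=1)
    return float(np.mean(nearest <= tolerance_s))

# Segments with a low SQI (e.g. < 0.9, an illustrative threshold) would be flagged
# as noisy and excluded before HRV metrics are computed.
sqi = beat_agreement_sqi([0.8, 1.6, 2.4], [0.81, 1.58, 2.43, 3.1])
```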
To the authors' knowledge, this is the first framework that combines low-power compressed sensing (CS) of the fetal abdominal ECG with a beat detector for fetal heart rate (fHR) estimation.
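For context, a minimal sketch of the low-power encoder side of compressed sensing is shown below: each ECG window is projected onto a random Bernoulli sensing matrix and only the compressed measurements are transmitted. The dimensions, seed, and matrix choice are placeholders, not the paper's actual design; sparse reconstruction would happen off-body at the receiver.

```python
# Encoder-side compressed sensing sketch (assumed Bernoulli +/-1 sensing matrix).
import numpy as np

rng = np.random.default_rng(0)
n = 256          # samples per ECG window (illustrative)
m = 64           # compressed measurements, i.e. 4x compression (illustrative)

phi = rng.choice([-1.0, 1.0], size=(m, n))   # sensing matrix, shared with the decoder

def cs_encode(x, phi=phi):
    """Low-complexity encoding y = phi @ x; +/-1 entries need only additions/subtractions."""
    return phi @ np.asarray(x, dtype=float)

window = rng.standard_normal(n)              # stand-in for one abdominal ECG window
y = cs_encode(window)                        # transmitted instead of the raw window
```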
Continuous monitoring of respiratory activity is desirable in many clinical applications to detect respiratory events. Non-contact monitoring of respiration can be achieved with near- and far-infrared spectrum cameras. However, current technologies are not sufficiently robust to be used in clinical applications; for example, they fail to estimate an accurate respiratory rate (RR) during apnea. We present a novel algorithm based on multispectral data fusion that aims to estimate RR even during apnea. The algorithm addresses the RR estimation and apnea detection tasks independently. Respiratory information is extracted from multiple sources and fed into an RR estimator and an apnea detector, whose results are fused into a final estimate of respiratory activity. We evaluated the system retrospectively using data from 30 healthy adults who performed diverse controlled breathing tasks while lying supine in a dark room and who reproduced central and obstructive apneic events. Fusing respiratory information from multiple multispectral sources reduced the root mean square error (RMSE) of the RR estimation from as high as 4.64 breaths/min (monospectral data) to 1.60 breaths/min. The median F1 scores for classifying obstructive apnea (0.75 to 0.86) and central apnea (0.75 to 0.93) also improved. Furthermore, treating apnea detection as an independent task led to a more robust system (RMSE of 4.44 vs. 7.96 breaths/min). Our findings may represent a step towards the use of cameras for vital-sign monitoring in medical applications.
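The sketch below illustrates one simple way per-source RR estimates could be fused and scored with RMSE. The fusion rule (a robust median) and the toy numbers are assumptions for illustration, not the authors' actual algorithm or data.

```python
# Illustrative fusion of per-source RR estimates and RMSE scoring (assumed median fusion).
import numpy as np

def fuse_rr(estimates):
    """Combine per-source RR estimates (breaths/min) with a robust per-window median."""
    return np.nanmedian(np.asarray(estimates, dtype=float), axis=0)

def rmse(estimate, reference):
    estimate, reference = np.asarray(estimate, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((estimate - reference) ** 2)))

# Three sources (e.g. different camera channels), one RR value per analysis window.
per_source = [[12.1, 15.8, 18.2],
              [11.5, 16.4, 17.9],
              [13.0, 15.9, 25.0]]   # last source has an outlier in window 3
reference  =  [12.0, 16.0, 18.0]    # toy reference values, not study data
print(rmse(fuse_rr(per_source), reference))
```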
Objective. To develop a sleep staging method from wrist-worn accelerometry and the photoplethysmogram (PPG) by leveraging transfer learning from a large electrocardiogram (ECG) database. Approach. In previous work, we developed a deep convolutional neural network for sleep staging from ECG using the cross-spectrogram of ECG-derived respiration and instantaneous beat intervals, heart rate variability metrics, spectral characteristics, and signal quality measures derived from 5793 subjects in the Sleep Heart Health Study (SHHS). We updated the weights of this model by transfer learning using PPG data derived from the Empatica E4 wristwatch worn by 105 subjects in the 'Emory Twin Study Follow-up' (ETSF) database, for whom overnight polysomnographic (PSG) scoring was available. The relative performance of PPG and actigraphy (Act), and combinations of the two signals, with and without transfer learning, was assessed. Main results. The model with transfer learning showed higher accuracy (by 1–9 percentage points) and Cohen's Kappa (by 0.01–0.13) than the model without transfer learning for every classification category. The McNemar test indicated that these incremental differences in accuracy, though relatively small, were statistically significant for every classification category. The out-of-sample classification performance using features from PPG and actigraphy was accuracy (Acc) = 68.62% and Kappa = 0.44 for four-class classification, and Acc = 81.49% and Kappa = 0.58 for two-class classification. Significance. We propose a combined PPG- and actigraphy-based sleep stage classification approach using transfer learning from a large ECG sleep database. The results demonstrate that the transfer learning approach improves estimates of sleep state. The use of automated beat detectors and quality metrics means that human over-reading is not required, and the approach can be scaled to large cross-sectional or longitudinal studies using wrist-worn devices for sleep staging.
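The evaluation described above compares epoch-level predictions with and without transfer learning using accuracy, Cohen's Kappa, and the McNemar test. A minimal sketch of that comparison, using toy labels rather than study data, is shown below; the variable names and values are placeholders.

```python
# Illustrative comparison of two classifiers against PSG labels on the same epochs.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

y_true     = np.array([0, 1, 2, 3, 1, 0, 2, 1, 3, 0])   # PSG-scored stages (toy data)
y_baseline = np.array([0, 1, 2, 1, 1, 0, 0, 1, 3, 2])   # without transfer learning
y_transfer = np.array([0, 1, 2, 3, 1, 0, 2, 1, 1, 0])   # with transfer learning

for name, pred in [("baseline", y_baseline), ("transfer", y_transfer)]:
    print(name, accuracy_score(y_true, pred), cohen_kappa_score(y_true, pred))

# McNemar's test uses a 2x2 table counting epochs where each model is correct/incorrect.
b_ok, t_ok = y_baseline == y_true, y_transfer == y_true
table = [[np.sum(b_ok & t_ok),  np.sum(b_ok & ~t_ok)],
         [np.sum(~b_ok & t_ok), np.sum(~b_ok & ~t_ok)]]
print(mcnemar(table, exact=True).pvalue)
```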