Wearable, multisensor consumer devices that estimate sleep are now commonplace, but the algorithms these devices use to score sleep are not open source, and the raw sensor data are rarely accessible for external use. As a result, these devices are of limited usefulness for clinical and research applications, despite holding much promise. We used a mobile application of our own design to collect raw acceleration and heart rate data from the Apple Watch worn by participants undergoing polysomnography, as well as during the ambulatory period preceding in-lab testing. Using these data, we compared the contributions of multiple features (motion, local standard deviation in heart rate, and "clock proxy") to classification performance across several classifiers. The best performance was achieved with neural nets, though the differences across classifiers were generally small. For sleep-wake classification, our method scored 90% of epochs correctly, correctly classifying 59.6% of true wake epochs (specificity) and 93% of true sleep epochs (sensitivity). Accuracy for differentiating wake, NREM sleep, and REM sleep was approximately 72% when all features were used. We tested the generalizability of our results by applying the models trained on Apple Watch data to data from the Multi-Ethnic Study of Atherosclerosis (MESA), and found that we could predict sleep with performance comparable to testing on our own dataset. This study demonstrates, for the first time, the ability to analyze raw acceleration and heart rate data from a ubiquitous wearable device with accepted, disclosed mathematical methods to improve the accuracy of sleep and sleep stage prediction.
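As an illustration of the feature-plus-classifier approach described in this abstract, the following is a minimal sketch in Python on synthetic data: epoch-level activity, the local standard deviation of heart rate, and a crude clock proxy are fed to a small neural network. The feature definitions, window lengths, classifier settings, and data are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of sleep-wake classification from wearable-derived features.
# Feature definitions, window lengths, and classifier settings are illustrative
# assumptions, not the study's exact pipeline; the data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic per-epoch inputs: activity counts and mean heart rate per 30 s epoch.
n_epochs = 2000
is_sleep = rng.random(n_epochs) < 0.85                      # labels (1 = sleep)
activity = rng.gamma(np.where(is_sleep, 0.5, 3.0))          # less motion during sleep
heart_rate = rng.normal(np.where(is_sleep, 55, 70), 5)      # lower HR during sleep

def local_std(x, half_window=10):
    """Standard deviation of x within +/- half_window epochs of each epoch."""
    return np.array([x[max(0, i - half_window): i + half_window + 1].std()
                     for i in range(len(x))])

# Feature matrix: motion, local heart rate variability, and a "clock proxy"
# crudely approximated here as hours elapsed since the start of the recording.
clock_proxy = np.linspace(0, 8, n_epochs)
X = np.column_stack([activity, local_std(heart_rate), clock_proxy])
y = is_sleep.astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(30, 30), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("sensitivity (sleep):", recall_score(y_test, pred, pos_label=1))
print("specificity (wake):", recall_score(y_test, pred, pos_label=0))
```

In this framing, sensitivity and specificity are simply per-class recall for sleep and wake epochs, matching the way the abstract reports its 93% and 59.6% figures.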
Using data collected through smartphones, we assess the effects of age, sex, lighting, and home country on sleep.
From smart work scheduling to optimal drug timing, there is enormous potential in translating circadian rhythms research results into precision medicine in the real world. However, pursuing such efforts requires the ability to accurately estimate circadian phase outside of the laboratory. One approach is to predict circadian phase non-invasively using light and activity measurements and mathematical models of the human circadian clock. Most mathematical models take light as an input and predict the effect of light on the human circadian system. However, consumer-grade wearables already owned by millions of individuals record activity instead of light, which prompts an evaluation of how accurately circadian phase can be predicted from motion alone. Here, we evaluate the ability of four different models of the human circadian clock to estimate circadian phase from data acquired by wrist-worn wearable devices. Multiple datasets across populations with varying degrees of circadian disruption were used for generalizability. Though the models we test yield similar predictions, analysis of data from 27 shift workers with high levels of circadian disruption shows that activity, which is recorded by almost every wearable device, is better at predicting circadian phase than measured light levels from wrist-worn devices when processed by mathematical models. For those living under normal conditions, circadian phase can typically be predicted to within 1 hour, even with data from a widely available commercial device (the Apple Watch). These results show that circadian phase can be predicted using existing data passively collected by millions of individuals, with accuracy comparable to much more invasive and expensive methods.
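The following is a minimal sketch of the general modeling idea: a limit-cycle oscillator is driven by a wrist-activity signal, and a phase marker is read off from the simulated state. The oscillator equations, coupling strength, intrinsic period, and phase marker are illustrative assumptions and do not correspond to any specific published model evaluated in the study.

```python
# Minimal sketch of predicting circadian phase from wrist activity with a
# limit-cycle oscillator. The oscillator form, coupling constant, and phase
# marker are illustrative assumptions, not any particular published model.
import numpy as np

def activity_drive(t_hours):
    """Toy activity signal: active 08:00-23:00, quiet overnight."""
    hour = t_hours % 24.0
    return 1.0 if 8.0 <= hour < 23.0 else 0.05

def simulate_oscillator(days=14, dt=0.01, tau=24.2, k=0.05):
    """Euler-integrate a van der Pol-style oscillator (x, xc) forced by activity."""
    n = int(days * 24 / dt)
    x, xc = 1.0, 0.0
    ts, xs = np.zeros(n), np.zeros(n)
    for i in range(n):
        t = i * dt
        B = k * activity_drive(t)                     # forcing from activity
        dx = (np.pi / 12.0) * (xc + B)
        dxc = (np.pi / 12.0) * (-x * (24.0 / tau) ** 2
                                + 0.23 * xc * (1.0 - x ** 2))
        x, xc = x + dx * dt, xc + dxc * dt
        ts[i], xs[i] = t, x
    return ts, xs

ts, xs = simulate_oscillator()
# Phase marker: clock time of the oscillator minimum on the final simulated day.
last_day = ts >= ts[-1] - 24.0
t_min = ts[last_day][np.argmin(xs[last_day])] % 24.0
print(f"predicted phase marker (oscillator minimum): {t_min:.1f} h")
```

The same structure applies when light is the input: only the forcing term changes, which is why models built for light can be re-driven by activity recorded on a wearable.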
Daily rhythms in human physiology and behavior are driven by the interplay of circadian rhythms, environmental cycles, and social schedules. Much research has focused on the mechanism and function of circadian rhythms in constant conditions or in idealized light-dark environments. There have been comparatively few studies into how social pressures, such as work and school schedules, affect human activity rhythms day to day and season to season. To address this issue, we analyzed activity on Twitter in >1,500 US counties throughout the 2012–2013 calendar years in 15-min intervals, using geographically tagged tweets representing ≈0.1% of the total population each day. We find that sustained periods of low Twitter activity are correlated with sufficient sleep as measured by conventional surveys. We show that this nighttime lull in Twitter activity is shifted to later times on weekends relative to weekdays, a phenomenon we term “Twitter social jet lag.” The magnitude of this social jet lag varies seasonally and geographically, with the West Coast experiencing less Twitter social jet lag compared to the Central and Eastern US, and is correlated with average commuting schedules and disease risk factors such as obesity. Most counties experience the largest amount of Twitter social jet lag in February and the lowest in June or July. We present evidence that these shifts in weekday activity coincide with relaxed social pressures due to local K-12 school holidays, and that the direct seasonal effect of altered day length is comparatively weaker.
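The following is a minimal sketch of how the shift in the nighttime activity lull between weekdays and weekends might be quantified from timestamped posts. The 15-min binning matches the abstract, but the lull definition, window length, and synthetic data are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of estimating "Twitter social jet lag": the shift of the
# nighttime activity lull on weekends relative to weekdays. The lull
# definition, window length, and synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd

def activity_profile(timestamps, bin_minutes=15):
    """Post counts in 15-min bins across the 24-h day."""
    minutes = timestamps.dt.hour * 60 + timestamps.dt.minute
    bins = (minutes // bin_minutes).astype(int)
    return np.bincount(bins, minlength=24 * 60 // bin_minutes)

def lull_midpoint(counts_per_bin, lull_hours=6, bin_minutes=15):
    """Clock time (hours) at the center of the quietest window, wrapping midnight."""
    k = int(lull_hours * 60 / bin_minutes)
    wrapped = np.concatenate([counts_per_bin, counts_per_bin[:k]])
    window_sums = np.convolve(wrapped, np.ones(k), mode="valid")[:len(counts_per_bin)]
    start_bin = int(np.argmin(window_sums))
    center_bin = (start_bin + k / 2) % len(counts_per_bin)
    return center_bin * bin_minutes / 60.0

# Synthetic timestamps: the weekend nighttime lull is ~1 h later than weekdays.
rng = np.random.default_rng(1)
rows = []
for day in pd.date_range("2012-01-01", periods=60, freq="D"):
    shift = 1.0 if day.weekday() >= 5 else 0.0           # later schedule on weekends
    hours = rng.normal(15.0 + shift, 5.0, 500) % 24.0    # most posts in the daytime
    rows.append(day + pd.to_timedelta(hours, unit="h"))
tweets = pd.Series(pd.DatetimeIndex(np.concatenate(rows)))

weekday = tweets[tweets.dt.weekday < 5]
weekend = tweets[tweets.dt.weekday >= 5]
jet_lag = lull_midpoint(activity_profile(weekend)) - lull_midpoint(activity_profile(weekday))
print(f"Twitter social jet lag: {jet_lag:.2f} h")
```

Aggregating this quantity by county and by month is what allows the geographic and seasonal comparisons described above.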