Periodicities (repeating patterns) are observed in many human behaviors. Their strength may capture untapped patterns, combining sleep, sedentary, and active behaviors into a single metric indicative of better health. We present a framework to detect periodicities from longitudinal wrist-worn accelerometry data. GENEActiv accelerometer data were collected from 20 participants (17 men, 3 women, aged 35–65) continuously for 64.4 ± 26.2 (range: 13.9 to 102.0) consecutive days. Cardiometabolic risk biomarkers and health-related quality of life metrics were assessed at baseline. Periodograms were constructed to determine patterns emergent from the accelerometer data. Periodicity strength was calculated using circular autocorrelations for time-lagged windows. The most notable periodicity was at 24 h, indicating a circadian rest-activity cycle; however, its strength varied significantly across participants. Periodicity strength was most consistently associated with LDL-cholesterol (r's = 0.40–0.79, P's < 0.05) and triglycerides (r's = 0.68–0.86, P's < 0.05) but was also associated with hs-CRP and health-related quality of life, even after adjusting for demographics, self-rated physical activity, and insomnia symptoms. Our framework demonstrates a new method for characterizing behavior patterns longitudinally, which captures relationships between 24 h accelerometry data and health outcomes.
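The two-step pipeline this abstract describes (periodograms to surface emergent periods, then circular autocorrelations to score a period's strength) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, hourly sampling rate, and toy data are assumptions.

```python
import numpy as np

def periodogram_peak_hours(activity, fs_per_hour=1.0):
    """Return candidate period lengths (in hours) ranked by spectral power.

    `activity` is a 1-D array of activity counts sampled at `fs_per_hour`
    samples per hour (an illustrative default, not the paper's setup).
    """
    x = activity - np.mean(activity)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_per_hour)  # cycles per hour
    order = np.argsort(power[1:])[::-1] + 1  # rank bins, skipping DC
    return 1.0 / freqs[order]  # periods in hours, strongest first

def periodicity_strength(activity, period_samples):
    """Circular autocorrelation at one lag: correlate the mean-removed
    series with itself rolled by `period_samples` (e.g. 24 for hourly data)."""
    x = activity - np.mean(activity)
    y = np.roll(x, period_samples)
    return float(np.dot(x, y) / np.dot(x, x))

# Toy example: a noisy 24 h rest-activity rhythm over 14 days of hourly data.
rng = np.random.default_rng(0)
t = np.arange(14 * 24)
series = np.clip(np.sin(2 * np.pi * t / 24), 0, None) \
    + 0.1 * rng.standard_normal(t.size)
print(round(periodogram_peak_hours(series)[0]))  # dominant period, in hours
print(periodicity_strength(series, 24))          # close to 1 for a strong rhythm
```

On this toy series the dominant periodogram peak lands at 24 hours and the circular autocorrelation at a 24 h lag is close to 1; a participant with an irregular rest-activity cycle would score much lower on the second number.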
Abstract: Lifelogging is the ambient, continuous digital recording of a person's everyday activities for a variety of possible applications. Much of the work to date in lifelogging has focused on developing sensors, capturing information, processing it into events and then supporting event-based access to the lifelog for applications like memory recall, behaviour analysis or similar. With the recent arrival of aggregating platforms such as Apple's HealthKit, Microsoft's HealthVault and Google's Fit, we are now able to collect and aggregate data from lifelog sensors, to centralize the management of data and in particular to search for and detect patterns of usage for individuals and across populations. In this paper, we present a framework that detects both low-level and high-level periodicity in lifelog data, detecting hidden patterns of which users would not otherwise be aware. We detect periodicities of time series using a combination of correlograms and periodograms, using various signal processing algorithms. Periodicity detection in lifelogs is particularly challenging because the lifelog data itself is not always continuous and can have gaps, as users may use their lifelog devices intermittently. To illustrate that periodicity can be detected from such data, we apply periodicity detection on three lifelog datasets with varying levels of completeness and accuracy.
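The abstract highlights gaps from intermittent device use as the key difficulty. One standard signal-processing approach for building a periodogram from unevenly sampled data is the Lomb-Scargle method; the abstract does not name its exact algorithms, so the following is only a hedged sketch of that general approach, on synthetic gapped data.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
# 30 days of hourly timestamps with ~40% randomly missing (device off).
t_full = np.arange(30 * 24, dtype=float)
keep = rng.random(t_full.size) > 0.4
t = t_full[keep]
y = np.sin(2 * np.pi * t / 24) + 0.2 * rng.standard_normal(t.size)

# Scan candidate periods from 6 h to 48 h.
# scipy's lombscargle expects angular frequencies (radians per hour here).
periods = np.linspace(6, 48, 2000)
omega = 2 * np.pi / periods
power = lombscargle(t, y - y.mean(), omega)
best_period = periods[np.argmax(power)]
print(best_period)  # peak near 24 h despite the missing samples
```

Unlike an FFT-based periodogram, this needs no imputation or resampling of the missing hours, which is why it suits intermittently recorded lifelogs.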
This paper introduces a new way to analyse and visualize quantified-self or lifelog data captured from any lifelogging device over an extended period of time. The mechanism works on the raw, unstructured lifelog data by detecting periodicities, those repeating patterns that occur within our lifestyles at different frequencies, including daily, weekly and seasonal. Focusing on the 24-hour cycle, we calculate the strength of the 24-hour periodicity at 24-hour intervals over an extended period of a lifelog. Changes in this strength of the 24-hour cycle can illustrate changes or shifts in underlying human behavior. We have performed this analysis on several lifelog datasets of durations from several weeks to almost a decade, from recordings of training distances to sleep data. In this paper we use 24-hour accelerometer data to illustrate the technique, showing how changes in human behavior can be identified.
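Computing the 24-hour periodicity strength at 24-hour intervals, as described, might look like the following windowed sketch. The window length, one-day step, and the autocorrelation-based strength measure are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def strength_24h(window, samples_per_day=24):
    """Circular autocorrelation of one multi-day window at a 24 h lag."""
    x = window - window.mean()
    y = np.roll(x, samples_per_day)
    denom = np.dot(x, x)
    return float(np.dot(x, y) / denom) if denom else 0.0

def strength_timeline(activity, samples_per_day=24, window_days=7):
    """Slide a week-long window across the lifelog one day at a time and
    record the 24 h periodicity strength for each window position."""
    w = window_days * samples_per_day
    starts = range(0, len(activity) - w + 1, samples_per_day)
    return np.array([strength_24h(activity[s:s + w], samples_per_day)
                     for s in starts])

# Toy lifelog: a regular 24 h rhythm that collapses halfway through.
rng = np.random.default_rng(2)
t = np.arange(28 * 24)
rhythm = np.sin(2 * np.pi * t / 24)
rhythm[len(t) // 2:] = 0.0  # the rhythm disappears after day 14
series = rhythm + 0.3 * rng.standard_normal(t.size)
timeline = strength_timeline(series)
print(timeline[0], timeline[-1])  # strength drops after the rhythm collapses
```

Plotting `timeline` against time is exactly the kind of visualization the paper argues for: a behavioral shift shows up as a drop or rise in the 24-hour strength curve.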
This paper focuses on the problem of visual saliency prediction, predicting regions of an image that tend to attract human visual attention, under a constrained computational budget. We modify and test various recent efficient convolutional neural network architectures, such as EfficientNet and MobileNetV2, and compare them with existing state-of-the-art saliency models such as SalGAN and DeepGaze II, both in terms of standard accuracy metrics like Area Under Curve (AUC) and Normalized Scanpath Saliency (NSS), and in terms of computational complexity and model size. We find that MobileNetV2 makes an excellent backbone for a visual saliency model and can be effective even without a complex decoder. We also show that knowledge transfer from a more computationally expensive model like DeepGaze II can be achieved via pseudo-labelling an unlabelled dataset, and that this approach gives results on par with many state-of-the-art algorithms at a fraction of the computational cost and model size.
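Of the accuracy metrics mentioned, Normalized Scanpath Saliency (NSS) has a compact standard definition: z-score the predicted saliency map, then average it at the human fixation locations, so chance performance is about 0. A minimal sketch (the toy maps below are illustrative, not from the paper's datasets):

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency: z-score the predicted map, then
    average its values at human fixation locations (a binary mask)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(s[fixation_map.astype(bool)].mean())

# Toy example: a prediction that peaks exactly where the fixations fall.
pred = np.zeros((8, 8))
pred[3:5, 3:5] = 1.0
fix = np.zeros((8, 8), dtype=bool)
fix[3, 3] = fix[4, 4] = True
print(nss(pred, fix))  # well above 0, since the peak covers both fixations
```

Because NSS only needs the predicted map and the fixation mask, it can score a lightweight model and an expensive teacher like DeepGaze II on an identical footing, which is what makes the paper's efficiency comparison meaningful.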