Measurements of infants' quotidian experiences provide critical information about early development. However, the role of sampling methods in providing these measurements is rarely examined. Here we directly compare language input from hour-long video-recordings and daylong audio-recordings within the same group of 44 infants at 6 and 7 months. We compared 12 measures of language quantity and lexical diversity, talker variability, utterance type, and object presence, finding moderate correlations across recording types. However, video-recordings generally featured far denser noun input across these measures than the daylong audio-recordings, more akin to 'peak' audio hours (though not as high in talkers and word types). Although audio-recordings captured ~10 times more awake-time than videos, the noun input in them was only 2-4 times greater. Notably, whether we compared videos to daylong audio-recordings or to peak audio times, videos featured relatively fewer declaratives and more questions; furthermore, the most common video-recorded nouns were less consistent across families than the top audio-recording nouns were. Thus, hour-long videos and daylong audio-recordings revealed fairly divergent pictures of the language infants hear and learn from in their daily lives. We suggest that short video-recordings provide a dense and somewhat different sample of infants' language experiences, rather than a typical one, and should be used cautiously for extrapolation about common words, talkers, utterance types, and contexts at larger timescales. If theories of language development are to be held accountable to 'facts on the ground' from observational data, greater care is needed to unpack the ramifications of sampling methods of early language input.
Background: Loss-of-control (LOC) eating commonly develops during adolescence, and it predicts full-syndrome eating disorders and excess weight gain. Although negative emotions and emotion dysregulation are hypothesized to precede and predict LOC eating, they are rarely examined outside the self-report domain. Autonomic indices, including heart rate (HR) and heart rate variability (HRV), may provide information about stress and the capacity for emotion regulation in response to stress.
Methods: We studied whether autonomic indices predict LOC eating in real time in adolescents with LOC eating and body mass index (BMI) ≥70th percentile. Twenty-four adolescents aged 12–18 (67% female; BMI percentile mean ± standard deviation = 92.6 ± 9.4) who reported at least twice-monthly LOC episodes wore biosensors to monitor HR, HRV, and physical activity for 1 week. They reported their degree of LOC after all eating episodes on a visual analog scale (0–100) using a smartphone.
Results: Adjusting for physical activity and time of day, higher HR and lower HRV predicted higher self-reported LOC after eating. Parsing between- and within-subjects effects, there was a significant, positive, within-subjects association between pre-meal HR and post-meal LOC rating. However, there was no significant within-subjects effect for HRV, nor were there between-subjects effects for either electrophysiologic variable.
Conclusions: Findings suggest that autonomic indices may either be a marker of risk for subsequent LOC eating or contribute to LOC eating. Linking physiological markers with behavior in the natural environment can improve knowledge of illness mechanisms and provide new avenues for intervention.
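Parsing between- and within-subjects effects, as described above, is commonly done by splitting each predictor into a person-mean component (between-subjects) and a person-mean-centered deviation (within-subjects). The sketch below illustrates that decomposition only; the heart-rate values are hypothetical, and this is not the authors' actual model.

```python
import numpy as np

def parse_between_within(subject_ids, x):
    """Split a predictor into a between-subjects component (each
    subject's mean) and a within-subjects component (deviation
    from that subject's own mean). Illustrative decomposition."""
    subject_ids = np.asarray(subject_ids)
    x = np.asarray(x, dtype=float)
    between = np.empty_like(x)
    for s in np.unique(subject_ids):
        mask = subject_ids == s
        between[mask] = x[mask].mean()
    within = x - between
    return between, within

# Hypothetical pre-meal heart rates for two subjects
subjects = ["a", "a", "a", "b", "b", "b"]
hr = [80, 85, 90, 60, 65, 70]
between, within = parse_between_within(subjects, hr)
# between: each subject's mean HR; within: deviation from own mean
```

Entering both components as separate predictors in a mixed model is what allows a within-subjects association (e.g., eating after one's own HR is elevated) to be distinguished from a between-subjects one (e.g., higher-HR adolescents reporting more LOC overall).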
Objective: Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm.
Approach: Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition in which head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell-time signals to classify reorienting events.
Main results: In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals differed across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that the hybrid classifier integrating EEG, pupil, and dwell-time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions.
Significance: We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but can nevertheless be captured and integrated to classify target vs. distractor objects to which the human subject orients.
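The multimodal fusion and AUC evaluation described above can be sketched as a fixed-weight combination of z-scored per-event features scored by the area under the ROC curve. Everything here is an assumption for illustration (simulated features, equal weights, the rank-sum AUC identity); it is a stand-in for, not a reproduction of, the authors' HDCA pipeline.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the fraction of (positive, negative) pairs in which
    the positive example scores higher, counting ties as half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 1 = target event, 0 = distractor
# Simulated per-event features standing in for the EEG component,
# pupil component, and dwell time (hypothetical signal/noise levels)
eeg = labels + rng.normal(0, 1.5, n)
pupil = labels + rng.normal(0, 2.0, n)
dwell = labels + rng.normal(0, 1.0, n)

z = lambda v: (v - v.mean()) / v.std()
# Equal-weight fusion of z-scored features (a learned weighting,
# as in HDCA, would replace these fixed weights)
hybrid = z(eeg) + z(pupil) + z(dwell)
print("hybrid AUC:", auc(hybrid, labels))
```

In this toy setup the fused score generally discriminates the two classes better than the noisier single features, which is the intuition behind combining EEG, pupil, and dwell-time evidence into one classifier.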