From birth, humans constantly make decisions about what to look at and for how long. Yet the mechanism behind such decision-making remains poorly understood. Here we present the rational action, noisy choice for habituation (RANCH) model. RANCH is a rational learning model that takes noisy perceptual samples from stimuli and makes sampling decisions based on Expected Information Gain (EIG). The model captures key patterns of looking time documented in developmental research: habituation and dishabituation. We evaluated the model with adult looking-time data collected from a paradigm analogous to the infant habituation paradigm. We compared RANCH with baseline models (a no-learning model and a no-perceptual-noise model) and with models using alternative linking hypotheses (Surprisal and KL divergence). We showed that (1) learning and perceptual noise are critical assumptions of the model, and (2) Surprisal and KL divergence are good proxies for EIG in the current learning context.
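For reference, the three linking hypotheses compared here correspond to standard information-theoretic quantities. The notation below is a generic sketch, not the paper's own formulation: x_{1:t} denotes the noisy perceptual samples gathered so far from the current stimulus, \theta the learner's beliefs about the stimulus concept, and x_{t+1} a prospective next sample.

\mathrm{Surprisal}(x_t) = -\log p(x_t \mid x_{1:t-1})

\mathrm{KL}(x_t) = D_{\mathrm{KL}}\!\left( p(\theta \mid x_{1:t}) \,\|\, p(\theta \mid x_{1:t-1}) \right)

\mathrm{EIG}(x_{t+1}) = \mathbb{E}_{x_{t+1} \sim p(x_{t+1} \mid x_{1:t})} \left[ D_{\mathrm{KL}}\!\left( p(\theta \mid x_{1:t}, x_{t+1}) \,\|\, p(\theta \mid x_{1:t}) \right) \right]

Under these generic definitions, Surprisal and KL score the sample just received, whereas EIG prospectively scores a sample that has not yet been taken.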
Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits, because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when the data are collected online. Recent advances in computer vision raise the possibility of automated annotation of video data. In this paper, we built on a system for automatic gaze annotation in human infants, iCatcher (Erel et al., 2022), by engineering improvements and then training and testing the system (hereafter, iCatcher+) on two datasets with substantial video and participant variability (214 videos collected in lab and mobile testing centers, and 265 videos collected via webcams in homes; infants and children aged 4 months to 3.5 years). We found that when trained on each of these video datasets, iCatcher+ performed with near human-level accuracy on held-out videos in distinguishing “LEFT” versus “RIGHT” and “ON” versus “OFF” looking behavior across both datasets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity) and video characteristics (e.g., resolution, luminance); and generalized to a third, entirely held-out dataset. We close by discussing the next steps required to fully automate the lifecycle of online infant and child behavioral studies, representing a key step towards enabling rapid, high-powered developmental research.
Much of our basic understanding of cognitive and social processes in infancy relies on measures of looking time, and specifically on infants’ visual preference for a novel or familiar stimulus. However, despite being the foundation of many behavioral tasks in infant research, the determinants of infants’ visual preferences are poorly understood, and differences in the expression of those preferences can be difficult to interpret. In this large-scale study, we test predictions from the Hunter and Ames model of infants’ visual preferences. We investigate three factors that this model predicts determine infants’ preference for novel versus familiar stimuli: age, stimulus familiarity, and stimulus complexity. Drawing on a large and diverse sample of infant participants (N = XX), this study will provide crucial empirical evidence for a robust and generalizable model of infant visual preferences, leading to a more solid theoretical foundation for understanding the mechanisms that underlie infants’ responses in common behavioral paradigms. Moreover, our findings will guide future studies that rely on infants’ visual preferences to measure cognitive and social processes.
Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when these data are collected online. Recent advances in computer vision raise the possibility of automated annotation of these video data. In this article, we built on a system for automatic gaze annotation in young children, iCatcher, by engineering improvements and then training and testing the system (referred to hereafter as iCatcher+) on three data sets with substantial video and participant variability (214 videos collected in U.S. lab and field sites, 143 videos collected in Senegal field sites, and 265 videos collected via webcams in homes; participant age range = 4 months–3.5 years). When trained on each of these data sets, iCatcher+ performed with near human-level accuracy on held-out videos in distinguishing “LEFT” versus “RIGHT” and “ON” versus “OFF” looking behavior across all data sets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity), participant behavior (e.g., movement, head position), and video characteristics (e.g., luminance); and generalized to a fourth, entirely held-out online data set. We close by discussing next steps required to fully automate the life cycle of online infant and child behavioral studies, representing a key step toward enabling robust and high-throughput developmental research.
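To make the levels of analysis concrete, below is a minimal sketch (in Python) of how frame-level gaze labels from an automated annotator could be compared against human annotation and then aggregated into trial-level looking measures. The label vocabulary, trial length, and helper functions are hypothetical illustrations and are not part of iCatcher+.

# Illustrative sketch only: comparing automated and human frame-level gaze
# annotations, then aggregating frames into trial-level looking proportions.
# Labels, trial length, and function names are hypothetical.
from collections import Counter

FRAMES_PER_TRIAL = 150  # assumed 5-second trials at 30 frames per second

def frame_agreement(model_labels, human_labels):
    """Proportion of frames on which the two annotation sources agree."""
    assert len(model_labels) == len(human_labels)
    matches = sum(m == h for m, h in zip(model_labels, human_labels))
    return matches / len(model_labels)

def trial_looking_proportions(labels, frames_per_trial=FRAMES_PER_TRIAL):
    """Per-trial proportion of frames labeled LEFT, RIGHT, or OFF."""
    trials = [labels[i:i + frames_per_trial]
              for i in range(0, len(labels), frames_per_trial)]
    return [{label: count / len(trial) for label, count in Counter(trial).items()}
            for trial in trials]

# Toy annotations for a single 150-frame trial:
model = ["LEFT"] * 100 + ["RIGHT"] * 40 + ["OFF"] * 10
human = ["LEFT"] * 95 + ["RIGHT"] * 45 + ["OFF"] * 10
print(frame_agreement(model, human))         # ~0.97 frame-level agreement
print(trial_looking_proportions(model))      # LEFT ~0.67, RIGHT ~0.27, OFF ~0.07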