2022
DOI: 10.31234/osf.io/bmweu
Preprint

Visual attention and language exposure during everyday activities: an at-home study of early word learning using wearable eye trackers

Abstract: Early language learning relies on statistical regularities that exist across timescales in infants’ lives. Two types of these statistical regularities are the routine activities that make up their day, such as mealtime and play, and the real-time repeated behaviors that make up the moment-by-moment dynamics of those routines. These two types of regularities are different in nature and are embedded at two different temporal scales, which led to divergent research in the literature – those who collect long-form …

Cited by 3 publications (4 citation statements, published 2022–2024) | References 16 publications
“…The HOME lab environment is designed to answer this question as we can collect data from parent‐infant interactions across a diversity of tasks, including book reading, play with more interactive toys (like a ball maze), meal preparation and feeding (Peters et al., 2020), and “grooming” the infant (after particularly messy snack times). Expanding our studies of dyadic behavior to different routines will improve our understanding of parental responsiveness in different contexts and the mechanisms underlying language learning and cognitive development (Schroer et al., 2022; Tamis‐LeMonda et al., 2019).…”
Section: Discussion (mentioning)
confidence: 99%
“…The bulk of this research has focused on children's visual input – how often different objects are in view (Clerkin & Smith, 2022; B. Long et al., 2021), how often they are the focus of joint attention (Bergelson et al., 2019; Schroer et al., 2022), or how often they are interacted with by children and their caregivers (Suarez-Rivera et al., 2022; Swirbul et al., 2022). Some studies have taken advantage of information provided in the visual signal to characterize other aspects of children's home experiences, including their physical proximity to (Suarez-Rivera et al., 2023) and touch or gesture from adult caregivers (Abu-Zhaya et al., 2017; Kosie & Lew-Williams, 2023), along with their physical location in space throughout the home (Custode & Tamis-LeMonda, 2020; Roy et al., 2015).…”
Section: Characterizing Multimodal Input (mentioning)
confidence: 99%
“…In recent years, we have seen the development of many new systems for capturing at-home egocentric video data, including head-worn cameras, such as BabyView (B. Long et al., 2023) and EgoActive (Geangu et al., 2023), as well as advancements in head-mounted eye-trackers (Schroer et al., 2022). Personal security cameras (similar to police cameras) open up another off-the-shelf option.…”
Section: Characterizing Multimodal Input (mentioning)
confidence: 99%