Successful communication requires attention to people and the complex signals they generate. Adapting attention from one moment to the next allows speakers and listeners to produce and perceive cues in real time and to adjust their responses in ways that facilitate an efficient exchange of information (MacDonald, 2013a, 2013b). In order to break into these dynamics of communication and learn from their caregivers, young children have to process and adapt to the rich, multidimensional information embedded in child-directed speech (CDS; McMurray, 2016; Potter & Lew-Williams, 2019). While it is well known that young children prefer listening to CDS over adult-directed speech (ADS; Cooper & Aslin, 1990; Fernald & Kuhl, 1987; Werker & McLeod, 1989), less is understood about their processing of and learning from specific features of CDS, such as its characteristically variable prosody, across both shorter and longer timescales. In the current set of studies, we explored how prosodic cues (specifically, the dynamics of pitch) affect young children's in-the-moment engagement with CDS, and in turn, how these differences in attention to pitch contours affect learning of new words. Learning in early childhood occurs in social contexts, and many experiments and theories have prioritized social cues in enabling the development of human cognition (Kuhl, 2007; Tomasello, 1992; Vygotsky, 1978). For example, Kuhl (2007) hypothesized that social interaction facilitates learning by modulating children's attention and arousal during key moments. Brand, Baldwin, and Ashburn
Emotions change from one moment to the next. They last anywhere from seconds to hours and then transition to other emotions. Here we describe the early ontology of these key aspects of emotion dynamics. In five cross-sectional studies that combine parent surveys and ecological momentary assessment, we characterize how emotion duration and transitions change over the first five years of life, and how they relate to children’s language development. Over this developmental period, the duration of children’s emotions increased, and emotion transitions became increasingly organized by valence, such that children were more likely to transition between similarly valenced emotions. Children with these more mature emotion profiles also had larger vocabularies and produced labels for the tested emotions. These findings advance our understanding of emotion and communication by highlighting their intertwined nature in development and by charting how dynamic features of emotional experiences change over the first years of life.
How do young children learn to organize the statistics of communicative input across milliseconds and months? Developmental science has made progress in elucidating how infants learn patterns in language and how infant-directed speech is engineered to ease short-timescale processing, but less is known about how children link perceptual experiences across multiple levels of processing within an interaction (from syllables to stories) and across development. In this article, we propose that three domains of research—statistical summary, neural processing hierarchies, and neural coupling—will be fruitful in uncovering the dynamic exchange of information between children and adults, both in the moment and in aggregate. In particular, we discuss how the study of brain-to-brain and brain-to-behavior coupling between children and adults will advance the field’s understanding of how children’s neural representations become aligned with the increasingly complex statistics of communication across timescales.
Learning about emotions is an important part of children's social and communicative development. How does children's emotion-related vocabulary emerge over development? How may emotion-related information in caregiver input support learning of emotion labels and other emotion-related words? This investigation examined language production and input among English-speaking toddlers (16-30 months) using two datasets: Wordbank (N = 5520; 36% female, 38% male, and 26% unknown gender; 1% Asian, 4% Black, 2% Hispanic, 40% White, 2% other, and 50% unknown ethnicity; collected in North America; dates of data collection unknown) and Child Language Data Exchange System (N = 587; 46% female, 44% male, 9% unknown gender, all unknown ethnicity; collected in North America and the UK; data collection dates, where available, ranged between 1962 and 2009). First, we show that toddlers develop the vocabulary to express increasingly wide ranges of emotional information during the first 2 years of life. Computational measures of word valence showed that emotion labels are embedded in a rich network of words with related valence. Second, we show that caregivers leverage these semantic connections in ways that may scaffold children's learning of emotion and mental state labels. This research suggests that young children use the dynamics of language input to construct emotion word meanings, and provides new techniques for defining the quality of infant-directed speech.