2022
DOI: 10.1146/annurev-linguistics-031220-011146

Learning Through Processing: Toward an Integrated Approach to Early Word Learning

Abstract: Children's linguistic knowledge and the learning mechanisms by which they acquire it grow substantially in infancy and toddlerhood, yet theories of word learning largely fail to incorporate these shifts. Moreover, researchers’ often-siloed focus on either familiar word recognition or novel word learning limits the critical consideration of how these two relate. As a step toward a mechanistic theory of language acquisition, we present a framework of “learning through processing” and relate it to the prevailing …

Cited by 11 publications (12 citation statements: 0 supporting, 12 mentioning, 0 contrasting). References 128 publications.
“…Although our primary aim was establishing the learnability of word-referent mappings with minimal ingredients, CVCL’s successes do not rule out more sophisticated forms of representation and reasoning, especially ones that might emerge in later development (55). These include mutual exclusivity (13), the principle of contrast (12), the shape bias (56), syntactic cues (57), social or gestural cues (15), or hypothesis generation (58).…”
Section: Discussion (mentioning)
confidence: 99%
“…Our multimodal training setup also does not capture the full richness of the multimodal signals that children may receive. Beyond imperfections in preprocessing (Section 2) and the inherent stochasticity in a child's gaze (Yu, Zhang, Slone, & Smith, 2021), the use of tokenized text rather than audio removes phonological or morphological cues, while also treating segmentation capabilities as given (Meylan & Bergelson, 2022). We mainly focused on linguistic analyses that are applicable to text‐only setups, because this enables us to better isolate the contribution of introducing multimodality.…”
Section: Discussion (mentioning)
confidence: 99%
“…Other work has explored this problem of scalability in a variety of ways, from early multimodal approaches (Roy & Pentland, 2002), to more recent work using large-scale naturalistic headcam data (Orhan, Gupta, & Lake, 2020; Tsutsui, Chandrasekaran, Reza, Crandall, & Yu, 2020) and studying the ways in which children or machines take an active role in word learning (Gelderloos, Kamelabad, & Alishahi, 2020; Zettersten & Saffran, 2019). The fact that multimodal neural networks can be trained from scratch, as demonstrated in Experiment 7 and other works (Harwath et al., 2018; Radford et al., 2021), suggests that these kinds of networks could be further developed to provide a unifying account of artificial word learning in the lab and naturalistic word learning in the wild (Meylan & Bergelson, 2021). Finally, while we attempted to test a broad range of phenomena, our list was by no means exhaustive.…”
Section: Discussion (mentioning)
confidence: 99%
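
The excerpts above refer to multimodal networks trained from scratch on paired images and utterances in the contrastive style of Radford et al. (2021), the family of objectives that CVCL also builds on. As a rough illustration only (this is not code from any cited paper; the function name and the temperature value are placeholders), the symmetric contrastive loss over a batch of paired image and text embeddings can be sketched in PyTorch as:

    import torch
    import torch.nn.functional as F

    def contrastive_loss(image_emb, text_emb, temperature=0.07):
        # L2-normalize embeddings so dot products become cosine similarities
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        # Pairwise similarity logits: (batch_size x batch_size)
        logits = image_emb @ text_emb.t() / temperature
        # Matched image-text pairs sit on the diagonal
        targets = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE: image-to-text plus text-to-image
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

Each matched image-utterance pair serves as the positive example on the diagonal, while every other pairing in the batch acts as a negative, which is what lets word-referent mappings emerge without explicit supervision.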