2020
DOI: 10.1111/cogs.12812

Density and Distinctiveness in Early Word Learning: Evidence From Neural Network Simulations

Abstract: High phonological neighborhood density has been associated with both advantages and disadvantages in early word learning. High density may support the formation and fine-tuning of new word sound memories, a process termed lexical configuration (e.g., Storkel, 2004). However, new high-density words are also more likely to be misunderstood as instances of known words, and may therefore fail to trigger the learning process (e.g., Swingley & Aslin, 2007). To examine these apparently contradictory effects, we trained…
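The abstract's key construct, phonological neighborhood density, is standardly operationalised as the number of words in the lexicon that differ from a target by a single phoneme substitution, addition, or deletion (cf. Luce & Pisoni, 1998). A minimal sketch under that assumption follows; the letters-as-phonemes encoding and the toy lexicon are illustrative placeholders, not materials from the paper.

```python
def is_neighbor(a: str, b: str) -> bool:
    """True if a and b differ by exactly one substitution, insertion, or deletion."""
    if len(a) == len(b):
        return a != b and sum(x != y for x, y in zip(a, b)) == 1
    shorter, longer = sorted((a, b), key=len)
    if len(longer) - len(shorter) != 1:
        return False
    # One insertion/deletion: removing some single symbol from `longer` must yield `shorter`.
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def neighborhood_density(word: str, lexicon: list[str]) -> int:
    """Count the lexicon entries that are phonological neighbours of `word`."""
    return sum(is_neighbor(word, w) for w in lexicon if w != word)

# Hypothetical toy lexicon, letters standing in for phonemes.
toy_lexicon = ["cat", "bat", "hat", "cut", "cast", "at", "dog"]
print(neighborhood_density("cat", toy_lexicon))  # -> 5 (bat, hat, cut, cast, at)
```

On this metric, "cat" is a high-density word in the toy lexicon, while "dog" has no neighbours at all; the paper's question is how such differences affect which novel words get learned.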

Cited by 5 publications (4 citation statements). References 37 publications.
“…We find that the model can correctly predict the crosslinguistic pattern for the Mandarin Chinese contrast but not for the Catalan contrast. In Study 2, we consider four other models developed in the speech technology community; these models are all state-of-the-art extensions of the well-known autoencoder neural network (Kramer, 1991) commonly used in modeling statistical language learning (e.g., Jones & Brandt, 2020; Mareschal & French, 2017; Plaut & Vande Velde, 2017). We evaluate these four algorithms on the same three datasets, to study whether any of the algorithms can correctly predict the discrimination patterns for all three contrasts, potentially providing a better model of infant phonetic learning than the one proposed in Schatz et al. (2021).…”
Section: Introduction (mentioning)
confidence: 99%
“…We modelled neurotypical and neurodivergent information seeking using autoencoder neural networks (Figure 1). Autoencoders have been used to simulate a wide range of child behaviours, from categorisation and visual object processing to curiosity-driven learning and language acquisition (Jones & Brandt, 2020; Mareschal et al., 2000; Twomey & Westermann, 2018; Westermann et al., 2009; Westermann & Mareschal, 2004, 2012). Autoencoders are a class of self-supervised neural network, which form representations of their learning environment by adaptively updating internal weights to minimise the difference between the input they receive and a reconstruction of that input that they produce.…”
Section: Simulations (mentioning)
confidence: 99%
“…We modelled neurotypical and neurodivergent information seeking using autoencoder neural networks (Figure 1). Autoencoders have been used to simulate a wide range of child behaviours, from categorisation and visual object processing to curiosity-driven learning and language acquisition (Jones & Brandt, 2020; Mareschal et al., 2000; Twomey & Westermann, 2018; Westermann et al., 2009; Westermann & Mareschal, 2004, 2012). Autoencoders are a class of self-supervised neural network, which form representations of their learning environment by adaptively updating internal weights to minimise the difference between the input they receive and a reconstruction of that input that they produce.…”
Section: Model Architecture (mentioning)
confidence: 99%
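The statements above summarise the mechanism at the heart of these simulations: an autoencoder learns by minimising the difference between its input and its own reconstruction of that input. The following is a purely illustrative NumPy sketch of that principle, not the architecture or data from any of the cited papers; the layer sizes, learning rate, and random binary inputs are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary input patterns (e.g., stand-ins for phonological feature vectors).
X = rng.integers(0, 2, size=(20, 8)).astype(float)

n_in, n_hidden = X.shape[1], 3
W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
lr = 0.5

for epoch in range(2000):
    h = sigmoid(X @ W1)          # internal (compressed) representation
    y = sigmoid(h @ W2)          # reconstruction of the input
    err = y - X                  # reconstruction error to minimise
    # Backpropagate the squared-error gradient through both layers.
    d_y = err * y * (1 - y)
    d_h = (d_y @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_y / len(X)
    W1 -= lr * X.T @ d_h / len(X)

final = sigmoid(sigmoid(X @ W1) @ W2)
print("mean reconstruction error:", np.mean((final - X) ** 2))
```

Reconstruction error falls as training proceeds; in developmental models of this kind, the residual error for a given input is often taken as an index of how novel, and hence how learnable or interesting, that input is.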