2019
DOI: 10.1111/cogs.12740

Statistical Learning of Unfamiliar Sounds as Trajectories Through a Perceptual Similarity Space

Abstract: In typical statistical learning studies, researchers define sequences in terms of the probability of the next item in the sequence given the current item (or items), and they show that high-probability sequences are treated as more familiar than low-probability sequences. Existing accounts of these phenomena all assume that participants represent statistical regularities more or less as they are defined by the experimenters: as sequential probabilities of symbols in a string. Here we offer an alternative, or po…
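
For readers unfamiliar with the convention the abstract refers to, sequential statistics are standardly operationalized as first-order transitional probabilities, P(next | current). The sketch below is illustrative only; the syllable inventory and stream are invented for the example:

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate first-order transitional probabilities P(next | current)
    from a sequence of syllables."""
    pair_counts = Counter(zip(stream, stream[1:]))
    unit_counts = Counter(stream[:-1])
    return {(a, b): n / unit_counts[a] for (a, b), n in pair_counts.items()}

# Invented familiarization stream: "bi ci" is a within-word pair,
# "ci da" straddles a word boundary.
stream = ["bi", "ci", "da", "bi", "ci", "go", "bi", "ci", "da"]
tps = transitional_probabilities(stream)
print(tps[("bi", "ci")])  # 1.0: high-probability, within-word transition
print(tps[("ci", "da")])  # ~0.67: lower-probability, boundary transition
```

The paper's point is that listeners may show the familiar/unfamiliar pattern without representing these conditional probabilities as such.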

Cited by 9 publications (7 citation statements); citing publications span 2019–2024. References 57 publications.

“…If bi has been discovered as a frequent unit, {bi}{ci…} becomes more probable; if bici is common relative to bi or ci, then {bici} becomes more probable. This basic mechanism is conceptually similar to that of other computational models, which have likewise argued that an apparent sensitivity to conditional-probability variation could emerge without any actual computation of conditional probability per se (Cabiddu et al., 2023; Perruchet & Vinter, 1998; see also Wang, Hutton, & Zevin, 2019). Denying that explicit computation of syllabic transitional-probability statistics is necessary for statistical learning leaves open several fundamental questions about the nature of the learning.…”
Section: Discussion (supporting; confidence: 63%)
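
The mechanism this statement describes can be made concrete with a toy chunking model in the spirit of PARSER (Perruchet & Vinter, 1998): recurring units are promoted to chunks, and parsing with those chunks reproduces an apparent sensitivity to transitional probability without ever computing one. Everything below (syllable inventory, number of passes, the merge_ratio threshold) is invented for illustration, not taken from the cited models:

```python
from collections import Counter

def chunk_learner(stream, passes=5, merge_ratio=0.5):
    """Frequency-driven chunking: start with single syllables as units,
    repeatedly parse the stream with the current lexicon, and promote a
    pair of units to a new chunk when it is frequent relative to its
    parts. No transitional probability is ever computed."""
    lexicon = {(s,) for s in stream}  # start: every syllable is a unit
    for _ in range(passes):
        # Greedy parse: prefer the longest known chunk at each position.
        parse, i = [], 0
        while i < len(stream):
            for size in range(min(4, len(stream) - i), 0, -1):
                cand = tuple(stream[i:i + size])
                if cand in lexicon:
                    parse.append(cand)
                    i += size
                    break
        unit_freq = Counter(parse)
        pair_freq = Counter(zip(parse, parse[1:]))
        # Promote pairs that are common relative to their parts.
        for (a, b), n in pair_freq.items():
            if n >= merge_ratio * min(unit_freq[a], unit_freq[b]):
                lexicon.add(a + b)
    return lexicon

stream = ["bi", "ci", "da"] * 10 + ["go", "la", "tu"] * 10
chunks = chunk_learner(stream, passes=3)
print([c for c in chunks if len(c) > 1])  # within-word chunks emerge;
# the rare boundary pair ("da", "go") is never promoted.
```
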
“…We consider it likely that parsed tokens become chunks that are mentally represented as such, and that may become protolexical units available for entry into syntactic and semantic linguistic networks if they continue to be supported in further language experience (e.g., Swingley, 2007). But how and when this happens is a matter of debate, and may involve multiple neurally distinct processes (e.g., Henin et al., 2021; Sučević & Schapiro, 2023; Wang et al., 2019). The modeling presented here is neutral in this regard.…”
Section: Discussion (mentioning; confidence: 99%)
“…In particular, two theories can be tested. One posits that statistical learning (SL) reflects changes in the similarity space, with transitions then learned as trajectories through that space (32). Another account conceives of SL as the acquisition of a community structure in a symmetric graph with uniform TPs, captured by changes in representational similarity (7).…”
Section: Discussion (mentioning; confidence: 99%)
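
One simple way to make the first account concrete is to place each sound at a point in a perceptual similarity space and score a sequence by the path it traces through that space. The sketch below is illustrative only; the two-dimensional embedding and the path-length score are invented for the example and are not the model of the cited papers:

```python
import numpy as np

def trajectory_length(sequence, embedding):
    """Score a sequence by the total distance of the path it traces
    through a perceptual similarity space."""
    points = np.array([embedding[s] for s in sequence])
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

# Invented 2-D perceptual embedding of four sounds.
embedding = {"bi": [0.0, 0.0], "ci": [0.1, 0.2],
             "da": [0.9, 0.8], "go": [1.0, 0.1]}
# Sequences tracing short, smooth trajectories behave differently in
# learning than sequences that jump around the space.
print(trajectory_length(["bi", "ci", "da"], embedding))  # ~1.22
print(trajectory_length(["bi", "go", "da"], embedding))  # ~1.71
```
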
“…This could potentially be explained by language-specific differences, where the similarity space of orthography to words (hence, phonology to words) works differently in Hebrew than in most other languages (Velan & Frost, 2009). Lastly, different syllable sequences occupy different locations in the phonetic similarity space, which affects the difficulty of learning those specific word forms (Emberson, Liu, & Zevin, 2013; Wang, Hutton, & Zevin, 2019) and is an additional factor in learnability. As such, this work calls for a computational model that takes into consideration the linguistic knowledge of a learner and how that knowledge biases the segmentation process, a fascinating topic for future work.…”
Section: Discussion (mentioning; confidence: 99%)