Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) 2019
DOI: 10.18653/v1/n19-1007
Measuring the perceptual availability of phonological features during language acquisition using unsupervised binary stochastic autoencoders

Abstract: In this paper, we deploy binary stochastic neural autoencoder networks as models of infant language learning in two typologically unrelated languages (Xitsonga and English). We show that the drive to model auditory percepts leads to latent clusters that partially align with theory-driven phonemic categories. We further evaluate the degree to which theory-driven phonological features are encoded in the latent bit patterns, finding that some (e.g. [±approximant]) are well represented by the network in both langu…
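The abstract describes an autoencoder whose bottleneck consists of binary stochastic units, so each percept is encoded as a bit pattern. A minimal numpy sketch of that forward pass is given below; the dimensions, random weights, and single-frame input are illustrative assumptions (not the paper's actual architecture), and the straight-through gradient trick needed to train such units is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_stochastic(logits, rng):
    """Sample each latent bit from a Bernoulli whose probability is sigmoid(logit)."""
    p = sigmoid(logits)
    return (rng.random(p.shape) < p).astype(np.float64)

# Hypothetical sizes: 13 acoustic input features, 8 latent bits.
d_in, d_code = 13, 8
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))  # toy encoder weights
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))  # toy decoder weights

x = rng.normal(size=(1, d_in))            # one acoustic frame (toy data)
code = binary_stochastic(x @ W_enc, rng)  # latent bit pattern for this percept
x_hat = code @ W_dec                      # reconstruction of the input

# The code is strictly binary, so each unit can be compared against a
# two-valued phonological feature such as [±approximant].
assert set(np.unique(code)) <= {0.0, 1.0}
```

Because the bottleneck emits only 0/1 values, each latent bit can be scored directly against a theory-driven binary feature, which is how the paper's feature-availability evaluation is framed.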

Cited by 18 publications (29 citation statements)
References 106 publications
“…Features have long been in the center of phonetic and phonological literature (Trubetzkoy, 1939 ; Chomsky and Halle, 1968 ; Clements, 1985 ; Dresher, 2015 ; Shain and Elsner, 2019 ). Extracting features based on unsupervised learning of pre-segmented phones with neural networks has recently seen success in the autoencoder architecture (Räsänen et al, 2016 ; Eloff et al, 2019 ; Shain and Elsner, 2019 ). Shain and Elsner ( 2019 ) train an autoencoder with binary stochastic neurons on pre-segmented speech data and argue that bits in the code of the autoencoder network imperfectly correspond to phonological features as posited by phonological theory.…”
Section: Discussion (mentioning)
confidence: 99%
“…Extracting features based on unsupervised learning of pre-segmented phones with neural networks has recently seen success in the autoencoder architecture (Räsänen et al, 2016 ; Eloff et al, 2019 ; Shain and Elsner, 2019 ). Shain and Elsner ( 2019 ) train an autoencoder with binary stochastic neurons on pre-segmented speech data and argue that bits in the code of the autoencoder network imperfectly correspond to phonological features as posited by phonological theory. As was argued in Section 4.3, our model shows traces of imperfect self-organizing of phonetic features (e.g., spectral moments) and phonological representations (e.g., the presence of [s]) in the latent space, while learning allophonic distributions at the same time.…”
Section: Discussion (mentioning)
confidence: 99%
“…Recently, neural network models for unsupervised feature extraction have seen success in modeling acquisition of phonetic features from raw acoustic data (Räsänen et al, 2016;Eloff et al, 2019;Shain and Elsner, 2019). The model in Shain and Elsner (2019), for example, is an autoencoder neural network that is trained on pre-segmented acoustic data. The model takes as input segmented acoustic data and outputs values that can be correlated to phonological features.…”
Section: Previous Work (mentioning)
confidence: 99%
“…Extracting features based on unsupervised learning of pre-segmented phones with neural networks has recently seen success in the autoencoder architecture (Räsänen et al, 2016; Eloff et al, 2019; Shain and Elsner, 2019). Shain and Elsner (2019) train an autoencoder with binary stochastic neurons on pre-segmented speech data and argue that bits in the code of the autoencoder network imperfectly correspond to phonological features as posited by phonological theory. As was argued in Section 4.3, our model shows traces of imperfect self-organizing of phonetic features (e.g., spectral moments) and phonological representations (e.g., the presence of [s]) in the latent space, while learning allophonic distributions at the same time.…”
Section: Latent Variables As Correlates Of Features (mentioning)
confidence: 99%