Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1155
Using LSTMs to Assess the Obligatoriness of Phonological Distinctive Features for Phonotactic Learning

Abstract: To ascertain the importance of phonetic information, in the form of phonological distinctive features, for segment-level phonotactic acquisition, we compare the performance of two recurrent neural network models of phonotactic learning: one that has access to distinctive features at the start of the learning process, and one that does not. Though the predictions of both models are significantly correlated with human judgments of non-words, the feature-naive model significantly outperforms the featu…
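The contrast the abstract draws, feature-aware versus feature-naive input encodings, can be illustrated with a toy example. The segments and the [voice, nasal] feature columns below are illustrative choices, not the paper's actual inventory or feature set:

```python
import numpy as np

segments = ["p", "b", "m"]

# Feature-naive encoding: arbitrary one-hot vectors. The model sees no
# phonetic relationship between segments; any similarity must be learned.
one_hot = np.eye(len(segments))

# Feature-aware encoding: hypothetical [voice, nasal] distinctive-feature
# vectors. Here /b/ and /m/ share [+voice] from the start of learning.
features = np.array([
    [0, 0],  # /p/ voiceless oral stop
    [1, 0],  # /b/ voiced oral stop
    [1, 1],  # /m/ voiced nasal
])

# One-hot rows are mutually orthogonal; feature rows are not, so a
# feature-seeded model starts with built-in segment similarities.
print(one_hot @ one_hot.T)   # identity matrix: no shared structure
print(features @ features.T)  # off-diagonal overlap for /b/ and /m/
```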

Cited by 9 publications (16 citation statements)
References 22 publications
“…The goal of this paper is to expand on the successes of this ongoing collective research programme. The algorithm described below shares many aspects with past work, such as vector embedding (Powers 1997, Calderone 2009, Goldsmith & Xanthos 2009, Nazarov 2014, 2016, Silfverberg et al. 2018, Mirea & Bicknell 2019), normalisation (Powers 1997, Silfverberg et al. 2018), matrix decomposition (Powers 1997, Calderone 2009, Goldsmith & Xanthos 2009, Silfverberg et al. 2018) and clustering algorithms (Powers 1997, Nazarov 2014, 2016, Mirea & Bicknell 2019). The innovations that will be presented below are largely in the combination and extension of these techniques, but the clustering methodology presented is relatively novel.…”
Section: Previous Work
confidence: 98%
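The pipeline this excerpt lists (vector embedding, normalisation, matrix decomposition) can be sketched on toy distributional data. The co-occurrence counts below are invented for illustration, not drawn from any of the cited works:

```python
import numpy as np

# Toy segment-by-context co-occurrence counts (invented data):
# rows are segments, columns are right-hand contexts.
counts = np.array([
    [4.0, 0.0, 1.0],
    [3.0, 1.0, 0.0],
    [0.0, 5.0, 2.0],
])

# Normalisation: convert raw counts to per-segment context distributions.
probs = counts / counts.sum(axis=1, keepdims=True)

# Matrix decomposition: truncated SVD yields low-dimensional embeddings
# in which distributionally similar segments lie close together.
U, S, Vt = np.linalg.svd(probs, full_matrices=False)
embeddings = U[:, :2] * S[:2]
print(embeddings.shape)  # (3, 2)
```

The resulting embeddings are what the clustering step (discussed below in the same excerpted paper) would then partition into candidate phonological classes.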
“…Their use of maximum entropy Hidden Markov Models also involves a kind of one-dimensional clustering on emission probability ratios, setting a threshold of 0 as the boundary between clusters. Powers (1997) and Mirea & Bicknell (2019) both use hierarchical clustering to extract classes from embeddings. Hierarchical clustering is simple, but not well suited to phonological class discovery: it cannot find multiple partitions of the same set of sounds, and requires the number of classes to be decided by an analyst.…”
Section: K-means Clustering
confidence: 99%
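The hierarchical clustering of embeddings that the excerpt attributes to Powers (1997) and Mirea & Bicknell (2019) can be sketched with SciPy. The 2-D "phoneme embeddings" below are invented values; note that the analyst must still supply the number of classes (`t=2`), which is exactly the limitation the excerpt points out:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy 2-D phoneme embeddings (invented values): two vowel-like points
# and two stop-like points.
phonemes = ["a", "i", "p", "t"]
emb = np.array([
    [0.9, 0.1],  # a
    [0.8, 0.2],  # i
    [0.1, 0.9],  # p
    [0.2, 0.8],  # t
])

# Agglomerative (hierarchical) clustering with Ward linkage.
Z = linkage(emb, method="ward")

# Cutting the tree into a fixed number of flat clusters: each segment
# gets exactly one class, so overlapping partitions cannot be recovered.
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(phonemes, labels)))
```

Because `fcluster` assigns each point to a single flat cluster, a segment cannot belong to two cross-cutting classes (e.g. both "voiced" and "nasal"), which is why the excerpt argues hierarchical clustering is ill-suited to phonological class discovery.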