Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.381

Analyzing analytical methods: The case of phonology in neural models of spoken language

Abstract: Given the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have been conducted to compare the strengths and weaknesses of each method. As a step in this direction we study the case of representations of phonology in neural network models of spoken language. We use two commonly applied analytical techniques, diagnostic classifiers and representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequen…

Cited by 16 publications (29 citation statements); references 25 publications.
“…In addition to reporting raw model performance, we report performance improvements from each model relative to (1) baseline U (untrained), an architecturally matched model left at random initialization (Chrupała et al, 2020), and (2) baseline X (cross-language), the architecturally matched model trained on the opposite language. 6 These two baselines quantify different contributions of the acquisition process.…”
Section: Discussion
confidence: 99%
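The baseline-relative reporting quoted above reduces to simple differences in score: improvement over an untrained model (baseline U) and over a cross-language model (baseline X). A minimal sketch, with invented accuracy values purely for illustration:

```python
# Illustrative arithmetic for baseline-relative reporting: gains over an
# untrained (U) and a cross-language (X) baseline. All numbers are made up.
scores = {
    "trained": 0.78,     # model trained on the target language
    "baseline_U": 0.42,  # same architecture, left at random initialization
    "baseline_X": 0.61,  # same architecture, trained on the other language
}

delta_U = scores["trained"] - scores["baseline_U"]  # gain over untrained
delta_X = scores["trained"] - scores["baseline_X"]  # gain over cross-language

print(f"improvement over U: {delta_U:.2f}")
print(f"improvement over X: {delta_X:.2f}")
```

The two deltas separate what training contributes beyond the architecture itself (U) from what is specific to the target language rather than speech training in general (X).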
“…Following e.g. Shain and Elsner (2019) and Chrupała et al (2020), we do so using probing classifiers. In particular, for each layer of each model's encoder, we fit linear classifiers to (1) the phoneme labels and (2) the phonological feature labels associated with the gold phoneme segment corresponding to each phone boundary.…”
Section: Discussion
confidence: 99%
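The probing setup described in the quotation above — a linear classifier fit on frozen encoder activations to predict phoneme labels — can be sketched as follows. All data here is synthetic: real layer activations from a speech model would replace `activations`, and `phoneme_labels` is a hypothetical label array.

```python
# Sketch of a probing (diagnostic) classifier on synthetic activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, hidden_dim, n_phonemes = 600, 64, 5
phoneme_labels = rng.integers(0, n_phonemes, size=n_frames)

# Fake "activations": each phoneme class gets a distinct mean vector,
# so a linear probe can recover the labels above chance.
class_means = rng.normal(size=(n_phonemes, hidden_dim))
activations = class_means[phoneme_labels] + 0.5 * rng.normal(size=(n_frames, hidden_dim))

X_train, X_test, y_train, y_test = train_test_split(
    activations, phoneme_labels, test_size=0.25, random_state=0
)

# The probe's held-out accuracy is read as a measure of how linearly
# decodable the phoneme labels are from this layer's activations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = probe.score(X_test, y_test)
print(f"probe accuracy: {accuracy:.2f}")
```

In practice one such probe is fit per encoder layer, and the layer-wise accuracy profile indicates where phonemic information is most accessible.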
“…No individual study has attempted to look at the emergence of linguistic units at phonetic, syllabic, and lexical levels in a single model or study, nor compared multiple model architectures within the same experimental context. In addition, the existing studies have rarely reported baseline measures from untrained models, making it unclear how much of the findings are actually driven by the visually-guided parameter optimization compared to the effects of non-linear network dynamics also present with randomly initialized model parameters (see also Chrupała et al, 2020). This leaves unclear questions such as: 1) Can a single neural model reflect emergence of several levels of linguistic structure at the same time, including phone(me)s, syllables, and words, both in time and selectivity?…”
Section: Evidence for Language Representations in VGS Models
confidence: 99%
“…different linguistic unit types. We deliberately focus on statistical and classifier-based measures of analysis that are suitable for basic level categorical data (phone, syllable, or word types), whereas measures such as representational similarity analysis (RSA; Kriegeskorte et al, 2008) used in some other works (e.g., Chrupała et al, 2020) are better suited for non-categorical reference data 3 .…”
Section: Selectivity Analysis of Hidden Layer Activations
confidence: 99%
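Representational similarity analysis (RSA; Kriegeskorte et al., 2008), contrasted above with classifier-based measures, compares two representation spaces via their pairwise dissimilarity structure rather than by decoding labels. A minimal sketch with synthetic stand-ins for model activations and reference feature vectors:

```python
# Minimal RSA sketch: correlate the pairwise-dissimilarity structure of
# model activations with that of a reference feature space. Data is synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items, ref_dim, model_dim = 40, 8, 32

# Reference representations (e.g. phoneme-feature vectors) and model
# activations that partly mirror them, plus noise.
reference = rng.normal(size=(n_items, ref_dim))
projection = rng.normal(size=(ref_dim, model_dim))
activations = reference @ projection + 0.5 * rng.normal(size=(n_items, model_dim))

# Condensed representational dissimilarity "matrices" for each space.
rdm_model = pdist(activations, metric="cosine")
rdm_reference = pdist(reference, metric="cosine")

# RSA score: rank correlation between the two dissimilarity structures.
rho, _ = spearmanr(rdm_model, rdm_reference)
print(f"RSA correlation (Spearman rho): {rho:.2f}")
```

Because RSA only needs a dissimilarity structure on the reference side, it suits continuous or graded reference data, which is the contrast the quoted passage draws against categorical unit types.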