2019
DOI: 10.31234/osf.io/ptd3a
Preprint

Generative Adversarial Phonology: Modeling unsupervised allophonic learning with neural networks

Abstract:

Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations. This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture and proposes a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties. The Generative Adversarial architecture is uniquely appr…
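
The excerpt does not spell out the paper's exact architecture, so the Python (PyTorch) sketch below is only an illustration of the kind of setup the abstract describes: a 1-D convolutional generator maps a random latent vector to a raw waveform, a discriminator is trained adversarially against it, and the learned latent space is then probed by varying one dimension at a time. The class names, layer sizes, sample length, and the probing routine are assumptions, not the published model.

# A minimal sketch, assuming a WaveGAN-style 1-D convolutional GAN over raw audio.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 100
N_SAMPLES = 4096  # assumed length of each generated waveform


class AudioGenerator(nn.Module):
    """Maps a random latent vector z to a raw-audio waveform."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 256 * 16)
        up = dict(kernel_size=25, stride=4, padding=11, output_padding=1)  # 4x upsampling per layer
        self.net = nn.Sequential(
            nn.ConvTranspose1d(256, 128, **up), nn.ReLU(),
            nn.ConvTranspose1d(128, 64, **up), nn.ReLU(),
            nn.ConvTranspose1d(64, 32, **up), nn.ReLU(),
            nn.ConvTranspose1d(32, 1, **up), nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 16)   # (batch, channels, time)
        return self.net(x)                 # (batch, 1, N_SAMPLES)


class AudioDiscriminator(nn.Module):
    """Scores waveforms as real (training data) vs. generated."""

    def __init__(self):
        super().__init__()
        down = dict(kernel_size=25, stride=4, padding=11)  # 4x downsampling per layer
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, **down), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, **down), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, **down), nn.LeakyReLU(0.2),
            nn.Conv1d(128, 256, **down), nn.LeakyReLU(0.2),
        )
        self.out = nn.Linear(256 * 16, 1)

    def forward(self, x):
        return self.out(self.net(x).flatten(1))


def train_step(G, D, real, opt_g, opt_d, bce=nn.BCEWithLogitsLoss()):
    """One adversarial update: D separates real from generated audio, G tries to fool D."""
    batch = real.size(0)
    z = torch.randn(batch, LATENT_DIM)
    fake = G(z)

    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    G, D = AudioGenerator(), AudioDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    real = torch.randn(8, 1, N_SAMPLES)  # stand-in for real speech snippets
    print(train_step(G, D, real, opt_g, opt_d))

    # Hypothetical probe of internal representations: vary one latent dimension
    # while holding the rest fixed and inspect how the generated audio changes,
    # e.g. whether some phonetic property covaries with that dimension.
    z = torch.randn(1, LATENT_DIM).repeat(5, 1)
    z[:, 0] = torch.linspace(-3, 3, 5)
    with torch.no_grad():
        probes = G(z)  # five waveforms differing only in latent dimension 0

After training on real speech, the same probing idea could be extended to any latent dimension, which is one plausible way to operationalize the abstract's proposal to relate the random (latent) space to phonetic and phonological properties of the generated output.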

Cited by 0 publications
References 98 publications (138 reference statements)