Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.347

Second-Order Unsupervised Neural Dependency Parsing

Abstract: Most unsupervised dependency parsers are based on first-order probabilistic generative models that only consider local parent-child information. Inspired by second-order supervised dependency parsing, we propose a second-order extension of unsupervised neural dependency models that incorporates grandparent-child or sibling information. We also propose a novel design of the neural parameterization and optimization methods of the dependency models. In second-order models, the number of grammar rules grows cu…
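As a rough illustration of the distinction the abstract draws (a sketch in DMV-style notation; the conditioning variables here are an assumption, not the paper's exact factorization), a first-order model scores each attachment from the head alone, while the second-order variants also condition on the grandparent or on a sibling:

P_{1}(c \mid h, \mathit{dir}, \mathit{val})                         % first-order: child c given its head h, direction, valence
P_{2}^{\mathrm{grand}}(c \mid h, g, \mathit{dir}, \mathit{val})     % grandparent-child: also condition on g, the head of h
P_{2}^{\mathrm{sib}}(c \mid h, s, \mathit{dir}, \mathit{val})       % sibling: also condition on the previously attached sibling s

Conditioning each rule on a second word is what makes the rule set grow so quickly with vocabulary size in lexicalized models.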

Cited by 21 publications (16 citation statements). References 20 publications.
“…We conduct experiments on the English Penn Treebank using the WSJ corpus and convert the annotated constituency parses to the Stanford Dependency formalism (de Marneffe et al., 2006). Following Yang et al. (2020), we evaluate on sentences of length ≤ 10 in the test split, which contains 389 sentences (Appendix B.1 describes the same experiment on longer sentences, which have similar results). We experiment with the bert-base-cased model (more details in Appendix A) and evaluate by the undirected unlabeled attachment score (UUAS).…”
Section: Analysis: Unsupervised Parsing (mentioning, confidence: 99%)
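For readers unfamiliar with the metric mentioned in the excerpt above, here is a minimal sketch of how UUAS can be computed, assuming heads are given as a per-token list with 0 denoting the artificial root (the function name and root convention are illustrative, not the cited papers' code):

def uuas(gold_heads, pred_heads):
    """Fraction of gold dependency edges recovered, ignoring edge direction."""
    assert len(gold_heads) == len(pred_heads)
    # Tokens are 1-indexed; heads[i] is the head of token i+1, 0 is the root.
    gold_edges = {frozenset((i + 1, h)) for i, h in enumerate(gold_heads)}
    pred_edges = {frozenset((i + 1, h)) for i, h in enumerate(pred_heads)}
    return len(gold_edges & pred_edges) / len(gold_edges)

# A 3-token example where the predicted tree flips the direction of one edge:
# both word-word edges still match undirected, only the root attachment is missed.
print(uuas([2, 0, 2], [0, 1, 2]))  # 0.666...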
“…In other unsupervised syntactic induction tasks, there is also a trend to use neural networks to produce grammar rule probabilities. In unsupervised dependency parsing, the Dependency Model with Valence (DMV) (Klein and Manning, 2004) has been parameterized neurally to achieve higher induction accuracy (Jiang et al., 2016; Yang et al., 2020). In part-of-speech (POS) induction, neurally parameterized Hidden Markov Models (HMMs) also achieve state-of-the-art results (Tran et al., 2016; He et al., 2018).…”
Section: Related Work (mentioning, confidence: 99%)
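The neural parameterization mentioned in this excerpt can be pictured roughly as follows: instead of keeping a separate multinomial for every (head tag, direction, valence) context, a small network over embeddings produces the CHILD distribution. This is a hedged PyTorch sketch; the layer sizes and exact conditioning are assumptions, not the architectures of Jiang et al. (2016) or Yang et al. (2020).

import torch
import torch.nn as nn

class NeuralChildRules(nn.Module):
    def __init__(self, num_tags, dim=32):
        super().__init__()
        self.tag_emb = nn.Embedding(num_tags, dim)   # one embedding per POS tag
        self.dir_emb = nn.Embedding(2, dim)          # left / right attachment
        self.val_emb = nn.Embedding(2, dim)          # valence: first vs. further child
        self.mlp = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, num_tags))

    def forward(self, head_tag, direction, valence):
        """Return log P(child_tag | head_tag, direction, valence) over all tags."""
        h = torch.cat([self.tag_emb(head_tag),
                       self.dir_emb(direction),
                       self.val_emb(valence)], dim=-1)
        return torch.log_softmax(self.mlp(h), dim=-1)

# These log-probabilities would replace count-based rule estimates inside the
# usual DMV inside-outside or Viterbi routines.
rules = NeuralChildRules(num_tags=35)
logp = rules(torch.tensor([3]), torch.tensor([1]), torch.tensor([0]))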
“…In unsupervised dependency parsing, most methods are based on the Dependency Model with Valence (DMV) (Klein and Manning, 2004). Neurally parameterized DMVs have obtained state-of-the-art performance (Jiang et al., 2016; Han et al., 2017, 2019; Yang et al., 2020). However, they rely on gold POS tags and sophisticated initializations (e.g.…”
Section: Related Work (mentioning, confidence: 99%)
“…Specifically, they make a strong independence assumption on the generation of the child word such that it is only dependent on the nonterminal symbol. Bilexical dependencies, which have been shown useful in unsupervised dependency parsing (Han et al., 2017; Yang et al., 2020), are thus ignored.…”
Section: Introduction (mentioning, confidence: 99%)
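A one-line illustration of the contrast this excerpt draws (a sketch with generic symbols, not the cited paper's notation): under the independence assumption a word is generated from its (pre)terminal symbol alone, whereas a bilexicalized model also conditions on the head word of the governing constituent, which is exactly the head-dependent signal unsupervised dependency parsers exploit:

P_{\text{indep}}(w \mid A) \qquad\text{vs.}\qquad P_{\text{bilex}}(w \mid A, h_{\text{head}})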