Proceedings of the 18th International Workshop on Treebanks and Linguistic Theories (TLT, SyntaxFest 2019) 2019
DOI: 10.18653/v1/w19-7807
Weighted posets: Learning surface order from dependency trees

Abstract: This paper presents a novel algorithm for generating a surface word order for a sentence given its dependency tree using a two-stage process. Using dependency-based word embeddings and a Graph Neural Network, the algorithm first learns how to rewrite a dependency tree as a partially ordered set (poset) with edge weights representing dependency distance. The subsequent topological sort of this poset reflects a surface word order. The algorithm is evaluated against a naive baseline of average dependency distance…
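The two-stage process in the abstract can be illustrated with a minimal sketch. Assuming edge weights have already been learned (stage one), the sketch below shows only the second stage: projecting each dependent onto the real line at its head's position plus a signed predicted distance, then sorting, which is one concrete way to realize the topological sort of the weighted poset. The function name, data layout, and toy tree are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: recover a surface order from a dependency tree whose edges
# carry predicted signed head-dependent distances (negative = dependent
# precedes its head). Assumes weights were already learned in stage one.

def linearize(tokens, head, dist):
    """tokens: list of words; head[i]: index of token i's head (-1 = root);
    dist[i]: predicted signed distance from head to dependent i."""
    pos = [None] * len(tokens)

    def place(i):
        # Project token i onto the real line: root at 0, each dependent
        # at its head's projected position plus the predicted offset.
        if pos[i] is None:
            pos[i] = 0.0 if head[i] == -1 else place(head[i]) + dist[i]
        return pos[i]

    for i in range(len(tokens)):
        place(i)
    # Sorting by projected position yields a linear extension of the poset.
    return [tokens[i] for i in sorted(range(len(tokens)), key=lambda i: pos[i])]

# Toy example: "the cat sleeps", root "sleeps"; both dependents are
# predicted to precede their heads.
tokens = ["sleeps", "cat", "the"]
head = [-1, 0, 1]            # cat -> sleeps, the -> cat
dist = [0.0, -1.0, -1.0]
print(linearize(tokens, head, dist))  # -> ['the', 'cat', 'sleeps']
```

Sorting by projected position is simpler than a full topological sort but agrees with it whenever the projected positions are consistent with the poset's edge weights.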

Cited by 2 publications (2 citation statements)
References 42 publications
“…Zhang et al. (2012), Zhang (2013), Zhang and Clark (2015), Puduppully et al. (2016), and Song et al. (2018) used BLEU; Schmaltz et al. (2016) parsed their outputs and calculated the UAS parsing metric; Filippova and Strube (2009) used Kendall correlation together with edit distance to account for English word order. Similarly, Dyer (2019) used Spearman correlation between produced and gold word order for a dozen languages. White and Rajkumar (2012), in their CCG-based realization, calculated average dependency lengths between grammar-generated sentences and the gold standard.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…The DepDist approach to linearization relies on dependency distance tolerance: the idea that a dependent and head tolerate a certain contextual distance, measured as the number of intervening words, relative to other words in a sentence (Dyer, 2019). This dependency distance tolerance is learned from training data via a graph neural network (GNN), implemented within the Graph Nets framework (Battaglia et al., 2018), based on word2vecf syntactic embeddings (Levy and Goldberg, 2014).…”
Section: Dependency Distance Tolerance
Citation type: mentioning
Confidence: 99%
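The distance measure named in the statement above, the number of intervening words between a head and its dependent, can be made concrete in a few lines. This is a sketch of the stated definition only; the function name is an assumption, and the learned per-pair tolerances from the GNN are not modeled here.

```python
# Hedged sketch: dependency distance as the count of words strictly
# between a head and its dependent in the surface string.

def dependency_distance(head_idx, dep_idx):
    """head_idx, dep_idx: 0-based surface positions of head and dependent."""
    return abs(head_idx - dep_idx) - 1  # intervening words only

# Adjacent words have distance 0; e.g. positions 0 and 3 have two
# words between them.
print(dependency_distance(0, 1))  # -> 0
print(dependency_distance(0, 3))  # -> 2
```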