2015
DOI: 10.1162/tacl_a_00153
Approximation-Aware Dependency Parsing by Belief Propagation

Abstract: We show how to train the fast dependency parser of Smith and Eisner (2008) for improved accuracy. This parser can consider higher-order interactions among edges while retaining O(n^3) runtime. It outputs the parse with maximum expected recall, but for speed, this expectation is taken under a posterior distribution that is constructed only approximately, using loopy belief propagation through structured factors. We show how to adjust the model parameters to compensate for the errors introduced by this approxima…
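As a rough illustration of the decoding objective described in the abstract (a sketch, not the authors' implementation): under a posterior distribution, the expected recall of a candidate parse is proportional to the sum of the (approximate) marginal probabilities of the edges it contains, so decoding amounts to finding the tree that maximizes that sum. A minimal sketch with hypothetical edge marginals, enumerating candidates instead of running a real tree search:

```python
# Sketch: minimum-Bayes-risk decoding for expected recall.
# Assumes approximate marginal probabilities for each directed edge
# (head, modifier), e.g. produced by loopy belief propagation.
# All names and numbers below are illustrative, not from the paper.

def expected_recall(tree_edges, edge_marginals):
    """Expected recall of a candidate parse: the sum of the marginal
    probabilities of the edges it contains (up to normalization)."""
    return sum(edge_marginals.get(e, 0.0) for e in tree_edges)

def mbr_decode(candidate_trees, edge_marginals):
    """Pick the candidate whose edges have the highest total marginal.
    A real parser searches over all trees (e.g. with Eisner's O(n^3)
    algorithm) rather than enumerating candidates."""
    return max(candidate_trees,
               key=lambda t: expected_recall(t, edge_marginals))

# Toy example: token 0 is the root; edges are (head, modifier) pairs.
marginals = {(0, 1): 0.9, (1, 2): 0.7, (0, 2): 0.3}
trees = [[(0, 1), (1, 2)], [(0, 1), (0, 2)]]
best = mbr_decode(trees, marginals)  # -> [(0, 1), (1, 2)]
```

The point of approximation-aware training is that these marginals come from an approximate inference procedure, so the parameters are tuned with that approximation in the loop.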

Cited by 27 publications (30 citation statements)
References 23 publications
“…The message-passing mechanism has been studied in computer software [35] and NLP [36]. Recently, Gilmer et al. (2017) apply neural message-passing algorithms allowing long-range interactions between nodes in the graph [37].…”
Section: Related Work
confidence: 99%
“…, L_k are the neighbors of X excluding the recipient (Eq. 2). Here v_1 ∘ v_2 denotes the Hadamard (componentwise) product, and if k = 0 an all-ones vector is returned (Gormley, Dredze, & Eisner, 2015). If a clause is not a polytree, then this is detected (by analyzing the literal influence graph for loops) and reported to the user.…”
Section: Differentiable Inference for One-Clause Theories
confidence: 99%
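The message rule quoted above can be sketched directly: the outgoing message is the componentwise (Hadamard) product of the messages arriving from the k neighbors other than the recipient, defaulting to an all-ones vector when k = 0. A minimal illustration (function and variable names are hypothetical, not from the cited work):

```python
def hadamard_message(incoming, dim):
    """Componentwise (Hadamard) product of incoming messages.

    incoming: list of k message vectors, one per neighbor L_1..L_k
              (the recipient is excluded by the caller);
    dim:      message length.
    Returns an all-ones vector when k = 0, as in the quoted rule.
    """
    out = [1.0] * dim  # identity for the componentwise product
    for msg in incoming:
        out = [a * b for a, b in zip(out, msg)]
    return out

m = hadamard_message([[0.5, 2.0], [4.0, 0.25]], dim=2)  # -> [2.0, 0.5]
```

Starting from the all-ones vector makes the k = 0 case fall out of the loop for free, since ones are the multiplicative identity.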
“…Apart from this difference, training is carried out using the same procedure. Gormley et al.'s (2015) approximation-aware training is conceptually related, but focuses on variational decoding procedures. Hoang et al. (2017) also propose continuous relaxations of decoders, but are focused on developing better inference procedures.…”
Section: Differentiable Relaxed Decoders
confidence: 99%