Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.301
Efficient Constituency Parsing by Pointing

Abstract: We propose a novel constituency parsing model that casts the parsing problem into a series of pointing tasks. Specifically, our model estimates the likelihood of a span being a legitimate tree constituent via the pointing score corresponding to the boundary words of the span. Our parsing model supports efficient top-down decoding, and our learning objective is able to enforce structural consistency without resorting to expensive CKY inference. The experiments on the standard English Penn Treebank parsing task…
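As a reading aid, the decoding described in the abstract can be illustrated with a short, hypothetical Python sketch: a span (i, j) is split at the point that receives the highest pointing score, and the two halves are parsed recursively. The names decode and score are illustrative stand-ins for the model's learned pointing function, not the authors' implementation.

    # Minimal sketch of pointing-based top-down decoding (illustrative,
    # not the authors' code). `score(i, j, k)` stands in for the model's
    # pointing score for splitting span (i, j) at position k, which the
    # paper computes from the span's boundary words.
    import random

    def decode(score, i, j):
        """Greedily split span (i, j) into a binary tree, top down."""
        if j - i <= 1:               # single word: leaf constituent
            return (i, j)
        # pick the split point with the highest pointing score
        k = max(range(i + 1, j), key=lambda k: score(i, j, k))
        return ((i, j), decode(score, i, k), decode(score, k, j))

    # Toy usage: decode a 5-word "sentence" with random scores.
    print(decode(lambda i, j, k: random.random(), 0, 5))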

Cited by 8 publications (22 citation statements) · References 24 publications
“…From the results in Table 4, we see that our model achieves the highest F1 in French, Hungarian and Korean, exceeding the best baselines by 0.06, 0.15 and 0.13 points, respectively. Our method also rivals existing SoTA methods on other languages even though some of them use predicted POS tags (Nguyen et al., 2020) or bigger models (75M parameters) (Kitaev and Klein, 2018). Meanwhile, our model is smaller (31M parameters), uses no extra information, and runs 40% faster.…”
Section: Language
confidence: 84%
“…At each step t, the decoder autoregressively predicts the split point k_t in the input by conditioning on the current input span (i_t, j_t) and the previous splitting decisions ((i, j) → k)_{<t}. This conditional splitting formulation (the decision at step t depends on previous steps) can help our model find better trees than non-conditional top-down parsers (Stern et al., 2017a; Shen et al., 2018; Nguyen et al., 2020), thus bridging the gap between the global (but expensive) and the local (but efficient) models. The labels L(T) can be modeled by a label classifier, as described in the next section.…”
Section: Seq2seq Parsing Framework
confidence: 99%
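The conditional splitting loop this statement describes can be sketched as follows; this is a hypothetical illustration rather than the citing paper's code. split_scores stands in for a seq2seq decoder step, and the autoregressive conditioning on previous decisions is represented by passing the running history explicitly.

    # Hypothetical sketch of autoregressive (conditional) splitting:
    # each split k_t is chosen given the current span (i_t, j_t) and the
    # history of earlier ((i, j) -> k) decisions. `split_scores` is an
    # assumed stand-in for a decoder step, not the citing paper's code.
    from collections import deque

    def conditional_split_decode(split_scores, n):
        """Return the sequence of ((i, j) -> k) splitting decisions.

        split_scores((i, j), history) is assumed to return one score per
        candidate split point k in i+1 .. j-1, conditioned on the span
        and on all previous decisions (the autoregressive part).
        """
        history = []                     # ((i, j), k) decisions so far
        frontier = deque([(0, n)])       # spans still awaiting a split
        while frontier:
            i, j = frontier.popleft()
            if j - i <= 1:               # single-word span: leaf
                continue
            scores = split_scores((i, j), history)
            k = i + 1 + max(range(len(scores)), key=scores.__getitem__)
            history.append(((i, j), k))
            frontier.appendleft((k, j))  # left child is processed first
            frontier.appendleft((i, k))
        return history

Conditioning each decision on the history is what separates this loop from the non-conditional top-down parsers cited in the statement, where every split is scored independently of earlier ones.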