Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1170
AMR dependency parsing with a typed semantic algebra

Abstract: We present a semantic parser for Abstract Meaning Representations which learns to parse strings into tree representations of the compositional structure of an AMR graph. This allows us to use standard neural techniques for supertagging and dependency tree parsing, constrained by a linguistically principled type system. We present two approximative decoding algorithms, which achieve state-of-the-art accuracy and outperform strong baselines.

Cited by 63 publications (107 citation statements). References 36 publications (50 reference statements).
“…Corpus: AMR 2.0

Parser                       F1 (%)
Buys and Blunsom (2017)      61.9
van Noord and Bos (2017b)    71.0*
Groschwitz et al. (2018)     71.0 ± 0.5
Lyu and Titov (2018)         74.4 ± 0.2
Naseem et al. (2019)         75.5…”
Section: Results
Confidence: 99%
“…(2) the lack of gold alignments between nodes (concepts) in the graph and words in the text, which limits attempts to rely on explicit alignments to generate training data (Flanigan et al., 2014; Wang et al., 2015; Damonte et al., 2017; Foland and Martin, 2017; Peng et al., 2017b; Groschwitz et al., 2018; Guo and Lu, 2018); and (3) relatively limited amounts of labeled data (Konstas et al., 2017).…”
Section: Introduction
Confidence: 99%
“…2 Since the size of an AMR graph is approximately linear in the length of the sentence. (… 2015; Foland and Martin, 2017; Lyu and Titov, 2018; Groschwitz et al., 2018; Guo and Lu, 2018). The hand-crafted rules for re-categorization are often non-trivial, requiring exhaustive screening and expert-level manual effort.…”
Section: Introduction
Confidence: 99%
“…Table 4 shows the SEMBLEU and SMATCH scores of several recent models. In particular, we asked for the outputs of Lyu (Lyu and Titov, 2018), Gros (Groschwitz et al., 2018), van Noord (van Noord and Bos, 2017) and Guo (Guo and Lu, 2018) to evaluate on our SEMBLEU. For CAMR and JAMR, we obtained their outputs by running the released systems.…”
Section: Sentence-level Experiments
Confidence: 99%
“…Despite the large amount of work on AMR parsing (Flanigan et al., 2014; Artzi et al., 2015; Pust et al., 2015; Peng et al., 2015; Buys and Blunsom, 2017; Konstas et al., 2017; Wang and Xue, 2017; Ballesteros and Al-Onaizan, 2017; Lyu and Titov, 2018; Peng et al., 2018; Groschwitz et al., 2018; Guo and Lu, 2018), little attention has been paid to evaluating the parsing results, leaving SMATCH …
[Figure 2: Average, minimal and maximal SMATCH scores over 100 runs on 100 sentences.]
The running time increases from 6.6 seconds (r = 1) to 21.0 (r = 4).…”
Section: Introduction
Confidence: 99%
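The citation statement above refers to SMATCH, which scores two AMR graphs by the F1 of matched triples under the best renaming of graph variables; the run-to-run variance and the restart parameter r it mentions come from the fact that real Smatch searches over variable mappings with greedy hill-climbing plus r random restarts rather than exhaustively. As a minimal sketch of the underlying metric only (not the authors' code, and with brute-force search in place of hill-climbing, so it is feasible only for tiny graphs), the triple-matching F1 can be written as:

```python
from itertools import permutations


def smatch_f1(gold, pred):
    """F1 over matched triples, maximized over variable renamings.

    Each triple is (variable, relation, target); targets may be
    variables or constants. Brute-forces all injective mappings from
    pred variables to gold variables, which real Smatch replaces with
    greedy hill-climbing and random restarts.
    """
    gvars = sorted({v for v, _, _ in gold})
    pvars = sorted({v for v, _, _ in pred})
    gold_set = set(gold)

    # Pad with dummy names so surplus pred variables map to nothing useful.
    pool = gvars + ["_unmapped%d" % i for i in range(max(0, len(pvars) - len(gvars)))]

    best = 0
    for perm in permutations(pool, len(pvars)):
        mapping = dict(zip(pvars, perm))
        renamed = {(mapping[v], r, mapping.get(t, t)) for v, r, t in pred}
        best = max(best, len(renamed & gold_set))

    p = best / len(pred) if pred else 0.0
    r = best / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

Because the hill-climbing search in the real tool can get stuck in local optima, different restart counts (the r in the quoted passage) trade running time against the chance of finding the optimal mapping, which is what produces the score spread shown in the cited figure.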