Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/p17-1043

Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks

Abstract: We present a system which parses sentences into Abstract Meaning Representations, improving state-of-the-art results for this task by more than 5%. AMR graphs represent semantic content using linguistic properties such as semantic roles, coreference, negation, and more. The AMR parser does not rely on a syntactic preparse, or heavily engineered features, and uses five recurrent neural networks as the key architectural components for inferring AMR graphs.
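The abstract's key architectural claim is that recurrent networks, rather than a syntactic preparse or heavy feature engineering, drive the inference of AMR graphs. Below is a minimal, hedged sketch of a single BiLSTM tagging component of the kind such a parser might use for per-token concept prediction; the layer sizes, inputs, and outputs are illustrative assumptions, not the authors' actual five-network design.

```python
# Minimal sketch of one BiLSTM tagging component (illustrative only; NOT the
# authors' architecture). It scores a candidate label for each input token,
# which is one plausible way to identify AMR concepts.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> per-token label scores
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)

# Example: score 3 candidate concept labels for a 5-token sentence.
tagger = BiLSTMTagger(vocab_size=1000, num_labels=3)
scores = tagger(torch.randint(0, 1000, (1, 5)))
print(scores.shape)  # torch.Size([1, 5, 3])
```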

Cited by 35 publications (41 citation statements, 2018-2023); references 26 publications. Citation statements, ordered by relevance:
“…Our results are averages over 4 runs with 95% confidence intervals (JAMR-style baselines are single runs). On the 2015 dataset, our best models (local + projective, K&G + fixed-tree) outperform all previous work, with the exception of the Foland and Martin (2017) model; on the 2017 set we match state-of-the-art results (though note that van Noord and Bos (2017b) use 100k additional sentences of silver data). The fixed-tree decoder seems to work well with either edge model, but performance of the projective decoder drops with the K&G edge scores.…”
Section: Comparison To Baselines (supporting)
confidence: 55%
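As a side note on the reporting scheme in the quote above, a 95% confidence interval over a small number of runs is typically computed with a t-distribution. The sketch below uses invented Smatch scores purely to illustrate the arithmetic; it is not data from any of the cited papers.

```python
# Hypothetical illustration of "average over 4 runs with a 95% confidence
# interval". The scores are invented numbers, not published results.
import statistics
from scipy import stats

runs = [70.1, 70.5, 69.8, 70.4]              # invented Smatch F1, one per run
mean = statistics.mean(runs)
sem = statistics.stdev(runs) / len(runs) ** 0.5
# Two-sided 95% interval using a t-distribution with 3 degrees of freedom.
half_width = stats.t.ppf(0.975, df=len(runs) - 1) * sem
print(f"{mean:.1f} +/- {half_width:.1f}")    # -> 70.2 +/- 0.5
```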
“…Table 2 analyzes the performance of our two best systems (PD = projective, FTD = fixed-tree) in more detail, using the categories of Damonte et al. (2017), and compares them to Wang's, Flanigan's, and Damonte's AMR parsers on the 2015 set, and to […] and van Noord and Bos (2017b) for the 2017 dataset. (Foland and Martin (2017) did not publish such results.) The good scores we achieve on reentrancy identification, despite removing a large number of reentrant edges from the training data, indicate that our elementary as-graphs successfully encode phenomena such as control and coordination.…”
Section: Results (mentioning)
confidence: 98%
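The "reentrancy" category mentioned above counts nodes with more than one incoming edge, which arise from phenomena such as control and coordination. The toy graph below (an invented mini-example roughly corresponding to "the boy wants to go", not data from the cited evaluation) shows how reentrant nodes can be detected from a list of edge triples.

```python
# Rough sketch of what "reentrancy identification" measures: a node is
# reentrant when it has more than one incoming edge.
from collections import Counter

# Edges as (parent, relation, child) triples.
edges = [
    ("w", ":ARG0", "b"),   # want :ARG0 boy
    ("w", ":ARG1", "g"),   # want :ARG1 go
    ("g", ":ARG0", "b"),   # go   :ARG0 boy  <- second incoming edge for "b"
]

in_degree = Counter(child for _, _, child in edges)
reentrant = [node for node, deg in in_degree.items() if deg > 1]
print(reentrant)   # ['b'] -- the control construction makes "boy" reentrant
```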
“…In order to deal with these cases, we re-categorized AMR concepts. Similar re-categorization strategies have been used in previous work (Foland and Martin, 2017; Peng et al., 2017).…”
Section: Introduction (mentioning)
confidence: 98%
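Concept re-categorization, as described in the quote, collapses sparse lexical material such as names, dates, and numbers into a small set of placeholder categories before training and restores it after parsing. The rules and category names below are illustrative assumptions only; they are not the actual rules used by Foland and Martin (2017) or Peng et al. (2017).

```python
# Hedged sketch of token-level re-categorization: lexically sparse items are
# mapped to generic placeholders. Rules and labels here are assumptions.
import re

def recategorize(token):
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", token):
        return "DATE"
    if re.fullmatch(r"\d+(\.\d+)?", token):
        return "NUMBER"
    if token[:1].isupper():
        return "NAME"
    return token

print([recategorize(t) for t in ["Obama", "visited", "Prague", "in", "2009"]])
# ['NAME', 'visited', 'NAME', 'in', 'NUMBER']
```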
“…² Since the size of an AMR graph is approximately linear in the length of the sentence. (…, 2015; Foland and Martin, 2017; Lyu and Titov, 2018; Groschwitz et al., 2018; Guo and Lu, 2018). The hand-crafted rules for re-categorization are often non-trivial, requiring exhaustive screening and expert-level manual effort.…”
Section: Introduction (mentioning)
confidence: 99%