Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1451
Rewarding Smatch: Transition-Based AMR Parsing with Reinforcement Learning

Abstract: Our work involves enriching the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with Policy Learning and rewarding the Smatch score of sampled graphs. In addition, we combined several AMR-to-text alignments with an attention mechanism and supplemented the parser with pre-processed concept identification, named entities and contextualized embeddings. We achieve a highly competitive performance that is comparable to the best published results. We show an i…
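To make the training objective in the abstract concrete, here is a minimal sketch of the policy-learning idea: sample a transition sequence from the parser's policy, score the resulting graph against gold with Smatch, and scale the sequence log-likelihood by that reward. The `parser` API, `smatch_score` helper, and baseline handling are hypothetical stand-ins, not the authors' actual code.

```python
# Sketch of REINFORCE-style training with a Smatch reward, assuming a
# hypothetical transition-based parser interface. Not the authors' code.
import torch

def policy_learning_loss(parser, sentence, gold_graph, baseline=0.0):
    # Sample actions a_1..a_T and collect their log-probabilities.
    actions, log_probs = parser.sample_transitions(sentence)  # hypothetical API
    predicted_graph = parser.build_graph(actions)             # hypothetical API

    # Reward is the Smatch F1 of the sampled graph against the gold graph.
    reward = smatch_score(predicted_graph, gold_graph)        # in [0, 1]

    # REINFORCE: minimize -(R - b) * sum_t log p(a_t); the baseline b
    # (e.g., a running average of rewards) reduces gradient variance.
    advantage = reward - baseline
    return -advantage * torch.stack(log_probs).sum()
```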

Cited by 62 publications (72 citation statements) · References 39 publications

Citation statements:
“…Significant increases are also shown on wikification and negation, indicating the benefits of using the DBpedia Spotlight API and negation detection rules in post-processing. On all other subtasks except named entities, our approach achieves results competitive with the previous best approaches (Lyu and Titov, 2018; Naseem et al., 2019), and outperforms the previous best attention-based approach (van Noord and Bos, 2017b). The difference in scores on named entities is mainly caused by the anonymization methods used in preprocessing, which suggests a potential improvement from adapting the anonymization method presented in Lyu and Titov (2018) to our approach.…”
Section: Results
confidence: 71%
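The wikification step mentioned above is a post-processing call to the public DBpedia Spotlight REST API. A hedged sketch of what such a call looks like follows; the endpoint and response fields match Spotlight's documented JSON format, while error handling and the integration with AMR graphs are omitted.

```python
# Hedged sketch: entity wikification via the public DBpedia Spotlight API.
import requests

SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

def wikify(text, confidence=0.5):
    """Return (surface_form, DBpedia URI) pairs for entities found in text."""
    resp = requests.get(
        SPOTLIGHT_URL,
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    resources = resp.json().get("Resources", [])
    return [(r["@surfaceForm"], r["@URI"]) for r in resources]

# e.g. wikify("Barack Obama visited Berlin.") might yield
# [("Barack Obama", "http://dbpedia.org/resource/Barack_Obama"), ...]
```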
“…Table 2 summarizes their SMATCH scores on the test sets of two AMR general releases. On AMR 2.0, we outperform the latest push from Naseem et al. (2019) by 0.8% F1, and significantly improve on Lyu and Titov (2018)'s results by 1.9% F1. Compared to the previous best attention-based approach (van Noord and Bos, 2017b), our approach shows a substantial gain of 5.3% F1, with no usage of any silver-standard training data.…”
Section: Results
confidence: 86%
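For readers unfamiliar with the metric behind these F1 comparisons: Smatch searches for the variable mapping between two AMR graphs that maximizes triple overlap, then reports precision, recall, and F1 over triples. A hedged sketch using the reference `smatch` pip package is below; exact function names may vary across versions.

```python
# Hedged sketch: scoring a predicted AMR against gold with the smatch package.
import smatch

gold = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
pred = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02))"

# Best triple match between predicted and gold graphs (hill-climbing search).
match, test_total, gold_total = smatch.get_amr_match(pred, gold)
precision, recall, f1 = smatch.compute_f(match, test_total, gold_total)
print(f"Smatch P={precision:.3f} R={recall:.3f} F1={f1:.3f}")
```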
“…The AMR setup followed Ballesteros and Al-Onaizan (2017a), which introduced new actions to segment text and derive nodes or entity sub-graphs. In addition, we use the alignments and wikification from Naseem et al. (2019). Unlike previous works, we force-aligned the unaligned nodes to neighbouring words and allowed attachment to the leaf nodes of entity sub-graphs; this increased oracle Smatch from 93.7 to 98.1 and notably improved model performance.…”
Section: Experiments and Results
confidence: 99%
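One plausible reading of the force-alignment heuristic described in this statement: any AMR node left unaligned by the aligner inherits the token position of its nearest aligned neighbour in the graph. The sketch below uses a hypothetical adjacency-dict graph representation; the citing paper's actual implementation may differ.

```python
# Hedged sketch: force-align unaligned AMR nodes to their nearest aligned
# graph neighbour via breadth-first search over graph edges.
from collections import deque

def force_align(alignments, neighbours):
    """alignments: {node: token_idx or None}; neighbours: {node: [node, ...]}."""
    aligned = dict(alignments)
    for node, tok in alignments.items():
        if tok is not None:
            continue
        # BFS to the closest node that already has an alignment.
        queue, seen = deque([node]), {node}
        while queue:
            cur = queue.popleft()
            if alignments.get(cur) is not None:
                aligned[node] = alignments[cur]  # inherit neighbour's token
                break
            for nxt in neighbours.get(cur, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return aligned
```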
“…Unlike previous works, we force-aligned the unaligned nodes to neighbouring words and allowed attachment to the leaf nodes of entity sub-graphs; this increased oracle Smatch from 93.7 to 98.1 and notably improved model performance. We therefore provide results for the Naseem et al. (2019) oracle for comparison. Both previous works predict a node creation action and then a node label, or call a lemmatizer if no label is found.…”
Section: Experiments and Results
confidence: 99%
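The two-step node prediction mentioned at the end of this statement (predict a node-creation action, then a label, with a lemmatizer back-off) might look like the sketch below. The `label_scores` dictionary, threshold, and toy `lemmatize` helper are illustrative assumptions, not the cited systems' actual components.

```python
# Hedged sketch: node-label prediction with a lemmatizer fallback when the
# label model finds no label above threshold in its candidate vocabulary.
def predict_node_label(label_scores, candidate_labels, token, threshold=0.0):
    """Pick the best-scoring candidate label, else fall back to the lemma."""
    if candidate_labels:
        best = max(candidate_labels,
                   key=lambda l: label_scores.get(l, float("-inf")))
        if label_scores.get(best, float("-inf")) > threshold:
            return best
    return lemmatize(token)  # back-off, e.g. a spaCy/NLTK lemmatizer

def lemmatize(token):
    # Placeholder lemmatizer: strip a few common English suffixes.
    for suf in ("ing", "ed", "s"):
        if token.endswith(suf) and len(token) > len(suf) + 2:
            return token[: -len(suf)]
    return token
```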