Findings of the Association for Computational Linguistics: NAACL 2022
DOI: 10.18653/v1/2022.findings-naacl.190

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

Abstract: As Abstract Meaning Representation (AMR) implicitly involves compound semantic annotations, we hypothesize that auxiliary tasks which are semantically or formally related can better enhance AMR parsing. We find that 1) semantic role labeling (SRL) and dependency parsing (DP) bring more performance gain than other tasks, e.g. MT and summarization, in the text-to-AMR transition even with much less data. 2) To make a better fit for AMR, data from auxiliary tasks should be properly "AMRized" to PseudoAMR before training. Kno…
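The key idea in the abstract is converting auxiliary-task annotations (dependency parses, SRL frames) into AMR-like graphs before they are used for training. The sketch below is only illustrative and not the paper's actual AMRization rules: it linearizes a toy dependency parse into a PENMAN-style string of the kind a seq2seq text-to-graph parser consumes. The function name, variable naming scheme, and the direct reuse of dependency labels as roles are all assumptions.

```python
# Minimal sketch (assumed, not the paper's exact transform): linearize a
# dependency parse into a PENMAN-style "pseudo-AMR" string so it can be fed
# to the same seq2seq text-to-graph model used for genuine AMR.

def dep_to_pseudo_amr(tokens, heads, labels):
    """tokens: words; heads: 0-based head index per token (-1 = root);
    labels: dependency relation per token. Returns a PENMAN-like string."""
    children = {i: [] for i in range(len(tokens))}
    root = 0
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    def render(i):
        node = f"(x{i} / {tokens[i]}"
        for c in children[i]:
            node += f" :{labels[c]} {render(c)}"
        return node + ")"

    return render(root)

# Example: "boy wants go" with "wants" as the root.
print(dep_to_pseudo_amr(
    ["boy", "wants", "go"],
    [1, -1, 1],
    ["nsubj", "root", "xcomp"],
))
# -> (x1 / wants :nsubj (x0 / boy) :xcomp (x2 / go))
```

A parser pre-trained on such pseudo-AMR targets can then be fine-tuned on gold AMR, which is the intermediate-task setup the abstract refers to.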

Cited by 4 publications (5 citation statements)
References 15 publications
“…Two AMR-parsing models are currently the non-ensemble SotA: AMRBART (Bai et al, 2022) and ATP (Chen et al, 2022). These are closely followed by SPRING (Bevilacqua et al, 2021), the https://github.com/blazegraph/database https://solr.apache.org/…”
Section: AMR-Parsing Baseline (mentioning; confidence: 99%)
“…Therefore, we also choose SPRING as the baseline model to apply our HCA-based approaches. Additionally, we do not take the competitive AMR parser, ATP [19], into consideration for our compared models since it employs syntactic dependency parsing and semantic role labeling as intermediate tasks to introduce extra silver training data.…”
Section: Hierarchical Clause Annotations (mentioning; confidence: 99%)
“…In seq2seq-based approaches, Bevilacqua et al [15] employ the Transformer-based pretrained language model, BART [16], to address LDDs in long sentences. Among these categories, seq2seq-based approaches have become mainstream, and recent parsers [17][18][19][20] employ the seq2seq architecture with the popular codebase SPRING [15], achieving better performance. Notably, HGAN [20] integrates token-level features, syntactic dependencies (SDP), and SRL with heterogeneous graph neural networks and has become the state-of-the-art (SOTA) in terms of removing extra silver training data, graph-categorization, and ensemble methods.…”
Section: Introduction (mentioning; confidence: 99%)
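The citation statements above all describe the same seq2seq paradigm (SPRING, AMRBART, ATP): the input sentence is plain text and the linearized AMR graph is generated token by token from a pretrained BART. The snippet below is a rough sketch of that paradigm with the standard Hugging Face API. The checkpoint name is only a placeholder for the generic BART weights; a real parser would load weights fine-tuned on linearized AMR (e.g. a released SPRING or AMRBART checkpoint), and the generic model will not emit valid AMR.

```python
# Sketch of the seq2seq text-to-AMR setup referenced in the citations above.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint; substitute an AMR-finetuned seq2seq model in practice.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")

sentence = "The boy wants to go."
inputs = tokenizer(sentence, return_tensors="pt")

# Beam-search decoding; with AMR-finetuned weights the output would be a
# PENMAN-style graph such as (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))
outputs = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```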