Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) 2022
DOI: 10.18653/v1/2022.acl-short.37

Hierarchical Curriculum Learning for AMR Parsing

Abstract: Abstract Meaning Representation (AMR) parsing aims to translate sentences into semantic representations with a hierarchical structure, and has recently been empowered by pretrained sequence-to-sequence models. However, there exists a gap between their flat training objective (i.e., treating all output tokens equally) and the hierarchical AMR structure, which limits model generalization. To bridge this gap, we propose a Hierarchical Curriculum Learning (HCL) framework with Structure-level (SC) and Instance-level Curricula (IC).…
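The abstract only names the two curricula; as a rough illustration of the structure-level idea (train on shallow AMR graphs first, then progressively deeper ones), the sketch below orders training instances by graph depth and grows the training pool in stages. The simplified graph representation, the function names, and the depth-based difficulty measure are illustrative assumptions, not the authors' actual implementation.

```python
from typing import Dict, List, Tuple

# A sample is (sentence, graph, root), where graph maps a node to its children.
Sample = Tuple[str, Dict[str, List[str]], str]

def amr_depth(graph: Dict[str, List[str]], root: str) -> int:
    """Depth of a simplified AMR graph treated as a tree rooted at `root`.
    Real AMR graphs may contain re-entrancies, which a full implementation
    would need to guard against."""
    children = graph.get(root, [])
    if not children:
        return 1
    return 1 + max(amr_depth(graph, child) for child in children)

def curriculum_stages(samples: List[Sample], num_stages: int) -> List[List[Sample]]:
    """Split samples into cumulative training pools of increasing structural
    difficulty: early stages see only shallow graphs, later stages add
    progressively deeper ones."""
    ranked = sorted(samples, key=lambda s: amr_depth(s[1], s[2]))
    step = max(1, len(ranked) // num_stages)
    pools = []
    for i in range(num_stages):
        # Cumulative pool: the last stage always covers the full training set.
        cutoff = len(ranked) if i == num_stages - 1 else (i + 1) * step
        pools.append(ranked[:cutoff])
    return pools

if __name__ == "__main__":
    toy = [
        ("The boy wants to go.", {"want": ["boy", "go"], "go": ["boy2"]}, "want"),
        ("The dog barked.", {"bark": ["dog"]}, "bark"),
    ]
    for stage, pool in enumerate(curriculum_stages(toy, 2)):
        print(f"stage {stage}: {len(pool)} training samples")
```

In practice a scheduler like this would feed each pool to the seq2seq trainer for a fixed number of updates before moving to the next stage.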

Cited by 6 publications (9 citation statements)
References: 31 publications
“…AMR Parsing AMR parsing is a challenging semantic parsing task, since AMR is a deep semantic representation and consists of many separate annotations (Banarescu et al., 2013) (e.g., semantic relations, named entities, co-reference and so on). There are currently four major approaches to AMR parsing: sequence-to-sequence approaches (Ge et al., 2019; Xu et al., 2020; Bevilacqua et al., 2021; Wang et al., 2022), tree-based approaches (Zhang et al., 2019b,a), graph-based approaches (Lyu and Titov, 2018; Cai and Lam, 2020) and transition-based approaches (Naseem et al., 2019; Lee et al., 2020; Zhou et al., 2021a).…”
Section: Related Work
Mentioning confidence: 99%
“…In seq2seq-based approaches, Bevilacqua et al. [15] employ the Transformer-based pretrained language model, BART [16], to address LDDs in long sentences. Among these categories, seq2seq-based approaches have become mainstream, and recent parsers [17-20] employ the seq2seq architecture with the popular codebase SPRING [15], achieving better performance. Notably, HGAN [20] integrates token-level features, syntactic dependencies (SDP), and SRL with heterogeneous graph neural networks and has become the state-of-the-art (SOTA) in terms of removing extra silver training data, graph-categorization, and ensemble methods.…”
Section: Introduction
Mentioning confidence: 99%
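For context on the seq2seq setup mentioned in the excerpt above, here is a minimal, hypothetical sketch of applying a pretrained BART model from the `transformers` library to generate a linearized AMR string. The checkpoint name (`facebook/bart-large`) and the direct-generation setup are simplifying assumptions; SPRING additionally fine-tunes BART on AMR linearizations with an extended vocabulary, which is not shown here.

```python
# Minimal sketch (not the SPRING codebase) of using a pretrained BART
# seq2seq model to emit a linearized AMR string for an input sentence.
# Requires: pip install transformers torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumption: a generic BART checkpoint stands in for an AMR-fine-tuned one.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

inputs = tokenizer("The boy wants to go.", return_tensors="pt")

# Without AMR fine-tuning this yields ordinary text, not a valid graph;
# an AMR-fine-tuned checkpoint would decode something like
# "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 boy ) )".
output_ids = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```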
“…However, these AMR parsers still suffer performance degradation when encountering long sentences with deeper AMR graphs [18,20] that introduce most LDD cases. We argue that the complexity of the clausal structure inside a sentence is the essence of LDDs, where clauses are the core units of grammar and center on a verb that determines the occurrences of other constituents [21].…”
Section: Introduction
Mentioning confidence: 99%