2021
DOI: 10.1609/aaai.v35i14.17489
One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline

Abstract: In Text-to-AMR parsing, current state-of-the-art semantic parsers use cumbersome pipelines integrating several different modules or components, and exploit graph recategorization, i.e., a set of content-specific heuristics that are developed on the basis of the training set. However, the generalizability of graph recategorization in an out-of-distribution setting is unclear. In contrast, state-of-the-art AMR-to-Text generation, which can be seen as the inverse to parsing, is based on simpler seq2seq. In this p…
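The seq2seq framing that the abstract contrasts with pipeline parsers treats the AMR graph as a flat token sequence. As a hedged illustration only (the concept name and sentence are hypothetical, not taken from the paper), a PENMAN-style linearization might look like this:

```python
# Hedged sketch: seq2seq AMR parsing emits the graph as a token sequence.
# A PENMAN-style linearization of "The boy hums" (concept labels illustrative):
penman = "( h / hum-02 :arg0 ( b / boy ) )"

# A seq2seq model would generate such a string token by token;
# here we just show the trivial whitespace tokenization of the target.
tokens = penman.split()
print(tokens)
```

In this framing, parsing and generation become symmetric: one direction maps text to the linearized graph, the other maps the linearization back to text.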

Cited by 72 publications (52 citation statements)
References 35 publications
“…Abstract Meaning Representation (AMR) parsing is the task of capturing the semantics of a sentence through a rooted directed acyclic graph, with nodes representing concepts and edges representing their relations (Banarescu et al 2013). USeA offers a multilingual version of SPRING (Bevilacqua et al 2021a), a recent state-of-the-art, end-to-end system for Text-to-AMR generation.…”
Section: Methods
confidence: 99%
“…Thanks to the flexibility of seq2seq learning, these models can be easily adapted to different tasks, including sequence and token classification or sequence generation, inter alia. Interestingly, generative models have also been employed in tasks that are not usually formulated as sequence-to-sequence learning; for example, there have been effective applications of seq2seq architectures to definition modeling (Bevilacqua et al., 2020), cross-lingual Abstract Meaning Representation (Blloshmi et al., 2020), end-to-end Semantic Role Labeling (Blloshmi et al., 2021) and Semantic Parsing (Procopio et al., 2021; Bevilacqua et al., 2021a).…”
Section: Lexical Substitution Resources
confidence: 99%
“…We present SARA, a Semantic-graph-based pre-trAining fRamework for diAlogues, aiming to endow a pre-trained dialogue model with a stronger ability to infer semantic structures from conversations by using explicit semantic structures for more fine-grained supervision. In particular, we exploit the abstract meaning representation (AMR; Banarescu et al. 2013), a fine-grained deep structure widely adopted in semantic parsing (Lyu and Titov, 2018; Cai and Lam, 2020; Bevilacqua et al., 2021) and generation (Konstas et al., 2017; Song et al., 2018; Zhu et al., 2019; Bai et al., 2020; Ribeiro et al., 2021). As shown in Figure 1, AMR represents a sentence using a rooted directed graph, highlighting the core semantic units (e.g., "police", "hum", "boy") in a sentence and connecting them with semantic relations (e.g., ":arg0", ":time").…”
Section: Introduction
confidence: 99%
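The citation statements above describe AMR as a rooted directed graph whose nodes are concepts and whose edges are semantic relations such as ":arg0". As a minimal sketch of that data structure — with triples and relation choices that are purely illustrative, loosely echoing the "boy"/"hum"/"police" concepts quoted above — one might encode the graph as (source, relation, target) triples:

```python
# Minimal sketch of an AMR-style rooted directed graph encoded as triples.
# Concept and relation names are illustrative, not taken from any dataset.
triples = [
    ("h", ":instance", "hum-02"),  # root event node: the humming
    ("b", ":instance", "boy"),
    ("p", ":instance", "police"),
    ("h", ":arg0", "b"),           # the boy is the agent of the humming
    ("h", ":time", "p"),           # hypothetical relation, for illustration
]

def children(node, triples):
    """Return (relation, target) edges leaving `node`, excluding concept labels."""
    return [(r, t) for s, r, t in triples if s == node and r != ":instance"]

print(children("h", triples))  # [(':arg0', 'b'), (':time', 'p')]
```

Traversing from the root ("h" here) recovers the full graph, which is what both parsing (text to triples) and generation (triples to text) operate over.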