Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.324
End-to-End AMR Coreference Resolution

Abstract: Although parsing to Abstract Meaning Representation (AMR) has become very popular and AMR has been shown effective on many sentence-level tasks, little work has studied how to generate AMRs that can represent multi-sentence information. We introduce the first end-to-end AMR coreference resolution model in order to build multi-sentence AMRs. Compared with previous pipeline and rule-based approaches, our model alleviates error propagation and is more robust in both in-domain and out-of-domain settings. Be…

Cited by 12 publications (8 citation statements)
References 35 publications
“…The best approach from their study is incorporated as a baseline in §5. Fu et al. (2021) introduce an AMR coreference resolution system that uses a graph neural network to model gold sentence-level AMR graphs for coreference predictions. This system assumes gold graphs and is not comparable with document-level parsing systems.…”
Section: Results
confidence: 99%
“…In particular, we explore Abstract Meaning Representation (AMR) (Banarescu et al., 2013), a semantic formalism that has received much research interest (Song et al., 2018; Guo et al., 2019; Ribeiro et al., 2021a; Opitz et al., 2020, 2021; Fu et al., 2021) and has been shown to benefit downstream tasks such as spoken language understanding (Damonte et al., 2019), machine translation (Song et al., 2019), commonsense reasoning (Lim et al., 2020), and question answering (Kapanipathi et al., 2021; Bornea et al., 2021).…”
Section: Introduction
confidence: 99%