Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.231

Multilingual AMR-to-Text Generation

Abstract: Generating text from structured data is challenging because it requires bridging the gap between (i) structure and natural language (NL) and (ii) semantically underspecified input and fully specified NL output. Multilingual generation brings in an additional challenge: that of generating into languages with varied word order and morphological properties. In this work, we focus on Abstract Meaning Representations (AMRs) as structured input, where previous research has overwhelmingly focused on generating only i…
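To make the structured input concrete, the following is a minimal sketch, not taken from the paper, of the kind of preprocessing commonly applied before feeding an AMR graph to a sequence-to-sequence generator: the graph, written in PENMAN notation, has its variable names stripped and is flattened into a token sequence. The example sentence, the `linearize` helper, and the exact simplification are illustrative assumptions.

```python
import re

# A toy AMR for "The boy wants to go", written in PENMAN notation.
AMR = """\
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))"""

def linearize(amr: str) -> str:
    """Drop variable names ("w /", "b /", ...) and flatten the graph into a
    single token sequence, a simplification often used before handing AMRs
    to a sequence-to-sequence model."""
    no_vars = re.sub(r"\b\w+\s*/\s*", "", amr)
    return " ".join(no_vars.split())

print(linearize(AMR))
# -> (want-01 :ARG0 (boy) :ARG1 (go-01 :ARG0 b))
```

Note that re-entrant variables (the second `:ARG0 b`) survive as bare tokens; how to handle them is one of the design choices that differs across AMR-to-text systems.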

Cited by 19 publications (27 citation statements)
References 43 publications
“…The results also show that our XLPT-AMR T-S models greatly advance the state of the art. For example, our XLPT-AMR T-S models outperform Sheth et al. (2021) by 3.4∼7.8 Smatch F1 on AMR parsing of the three languages, while surpassing Fan and Gardent (2020) by around 10 BLEU on AMR-to-text generation.…”
Section: Results
Mentioning confidence: 60%
“…Finally, we compare our approach to the previous studies. Among them, both Blloshmi et al. (2020) and Fan and Gardent (2020) adopt pretrained models that cover either the encoder part or the decoder part. From the results, we can see that even our baseline, Baseline pre-trained, outperforms them by pre-training the encoder and the decoder simultaneously.…”
Section: Results
Mentioning confidence: 99%
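The contrast drawn in this statement, pretraining only the encoder or only the decoder versus using a jointly pretrained encoder-decoder, can be illustrated with a short, hypothetical sketch that loads a single pretrained multilingual sequence-to-sequence model (mBART via Hugging Face Transformers) and decodes a linearized AMR into Spanish. The checkpoint name, language codes, and the idea of feeding the AMR string directly are assumptions for illustration; this does not reproduce the cited systems, and the model would still need fine-tuning on AMR-to-text pairs before producing faithful output.

```python
# Hypothetical illustration of a jointly pretrained encoder-decoder (mBART)
# applied to multilingual AMR-to-text generation.  This is NOT the setup of
# Fan and Gardent (2020) or the citing papers; it only shows what
# "pre-training the encoder and the decoder simultaneously" means in
# practice: one pretrained checkpoint initialises both halves of the model.
from transformers import MBartForConditionalGeneration, MBartTokenizer

checkpoint = "facebook/mbart-large-cc25"  # assumed checkpoint
tokenizer = MBartTokenizer.from_pretrained(
    checkpoint, src_lang="en_XX", tgt_lang="es_XX"
)
model = MBartForConditionalGeneration.from_pretrained(checkpoint)

# A linearized AMR (see the sketch after the abstract above).
source = "(want-01 :ARG0 (boy) :ARG1 (go-01 :ARG0 boy))"
inputs = tokenizer(source, return_tensors="pt")

# Without task-specific fine-tuning the output will not be a reliable
# verbalisation; the call only demonstrates that the encoder and the
# decoder come from the same pretrained model.
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["es_XX"],
    max_length=32,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```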