Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021
DOI: 10.18653/v1/2021.acl-long.73

XLPT-AMR: Cross-Lingual Pre-Training via Multi-Task Learning for Zero-Shot AMR Parsing and Text Generation

Abstract: Due to the scarcity of annotated data, Abstract Meaning Representation (AMR) research is relatively limited and challenging for languages other than English. Given the availability of the English AMR dataset and English-to-X parallel datasets, in this paper we propose a novel cross-lingual pre-training approach via multi-task learning (MTL) for both zero-shot AMR parsing and AMR-to-text generation. Specifically, we consider three types of relevant tasks, including AMR parsing, AMR-to-text generation, and machine translation. …
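The abstract describes joint training over three task types: AMR parsing, AMR-to-text generation, and machine translation. As a rough illustration only (not the authors' implementation), the following Python sketch shows how such multi-task sequence-to-sequence training can be organized with task prefixes and a single shared model; the checkpoint name (google/mt5-small), the prefixes, and the toy examples are assumptions made for this sketch.

import random
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: any multilingual encoder-decoder checkpoint could stand in here.
model_name = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Toy examples for the three task types named in the abstract (illustrative only).
tasks = {
    "parse to amr: ": [
        ("The boy wants to go.",
         "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"),
    ],
    "generate from amr: ": [
        ("(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))",
         "The boy wants to go."),
    ],
    "translate en-de: ": [
        ("The boy wants to go.", "Der Junge will gehen."),
    ],
}

model.train()
for step in range(10):  # a few steps, just to show the mechanics
    prefix, pairs = random.choice(list(tasks.items()))  # sample a task per step
    src, tgt = random.choice(pairs)
    batch = tokenizer(prefix + src, return_tensors="pt", truncation=True)
    labels = tokenizer(text_target=tgt, return_tensors="pt", truncation=True).input_ids
    loss = model(**batch, labels=labels).loss  # one shared model across all tasks
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

In a real setup the tasks would be sampled from full parallel and AMR corpora with proper batching; the point of the sketch is only that all three objectives update one shared set of parameters.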

Cited by 9 publications (5 citation statements)
References 55 publications

“…After fine-tuning t5wtense, we see a marked improvement in performance, increasing in BERTscore by approximately 8.8% absolute (11.86% relative improvement). Current state-of-the-art cross-lingual generation (Xu et al., 2021) achieves a BERTscore of 0.8534 on the same test set, which indicates that by fine-tuning on only 376 Spanish AMR annotations, we are able to achieve results close to the current best performing model. The marked improvement resulting from our fine-tuning demonstrates the utility of our corpus and suggests incorporating our data into more sophisticated generation or parsing models can lead to greater improvements.…”
Section: Disagreement Analysis
confidence: 80%
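The comparison above is reported in BERTscore. As a minimal sketch of how that metric can be computed with the bert_score Python package (the candidate and reference sentences below are placeholders, not the Spanish AMR test set discussed in the citation):

# Minimal BERTScore computation sketch (placeholder sentences, not the cited test set).
from bert_score import score

candidates = ["El niño quiere ir."]    # hypothetical system outputs
references = ["El niño quiere irse."]  # hypothetical gold references

# lang="es" lets the package pick a default multilingual model for Spanish.
P, R, F1 = score(candidates, references, lang="es", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.4f}")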
“…An emphasis on the man's ownership of the letter elicits the :poss role, whereas emphasizing the letter's creation by the man elicits the :source role.

Table 3: BERTscore results for: the output of the t5wtense generation model without any fine-tuning, t5wtense after fine-tuning with our data, and the state-of-the-art XLPT-AMR cross-lingual AMR generation model (Xu et al., 2021) on our test split.

t5wtense              0.7389
Fine-tuned t5wtense   0.8265
XLPT-AMR              0.8534
…”
Section: Disagreement Analysis
confidence: 99%
“…Recently, multi-lingual pre-trained models such as mBERT [9] and XLM [10] have increasingly attracted attention in various polyglot tasks [11], [12]. Mulcaire et al [13] adopted multi-lingual contextualized word embedding for structure extraction tasks, such as semantic role labeling, dependency parsing, and named-entity recognition.…”
Section: Target Expression Holder
confidence: 99%
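For context on the multilingual encoders mentioned in this citation, here is a minimal sketch of extracting contextualized embeddings from an XLM-style checkpoint with Hugging Face transformers; the checkpoint name and sentence are illustrative choices, not taken from the cited work.

# Sketch: contextual word embeddings from a multilingual pre-trained encoder.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")  # illustrative checkpoint
model = AutoModel.from_pretrained("xlm-roberta-base")

inputs = tokenizer("Der Junge will gehen.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(hidden.shape)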
“…Cai et al (2021b) propose to use bilingual input to enable a model to predict more accurate AMR concepts. Xu et al (2021) propose a crosslingual pretraining approach via multitask learning for AMR parsing. Cai et al (2021a) propose to use noisy knowledge distillation for multilingual AMR parsing.…”
Section: Related Work
confidence: 99%