Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1305

Zero-Shot Cross-Lingual Abstractive Sentence Summarization through Teaching Generation and Attention

Abstract: Abstractive Sentence Summarization (ASSUM) aims to grasp the core idea of the source sentence and present it as the summary. It has been extensively studied using statistical models or neural models trained on large-scale monolingual source-summary parallel corpora. But there is no cross-lingual parallel corpus, in which the source-sentence language differs from the summary language, on which to directly train a cross-lingual ASSUM system. We propose to solve this zero-shot problem by using resource-rich monolingual ASSUM sy…
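As a minimal sketch of the teacher-student idea the abstract alludes to: a monolingual ASSUM teacher can supervise a cross-lingual student on both summary word generation and attention. The function and tensor names below are hypothetical, and the MSE attention term is a simplification rather than the paper's exact formulation.

```python
# Hedged sketch (not the authors' code) of distillation on generation + attention.
import torch.nn.functional as F

def teaching_loss(student_logits, teacher_logits,
                  student_attn, teacher_attn, attn_weight=1.0):
    """Combine generation teaching and attention teaching.

    student_logits / teacher_logits: (batch, tgt_len, vocab) decoder outputs.
    student_attn / teacher_attn:     (batch, tgt_len, src_len) attention maps,
        assumed already aligned to a shared source segmentation.
    """
    # (a) Generation teaching: pull the student's word distribution toward
    # the teacher's soft distribution with a KL-divergence term.
    gen_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # (b) Attention teaching: match the student's source-side attention to
    # the teacher's (a simple MSE stand-in for the paper's objective).
    attn_loss = F.mse_loss(student_attn, teacher_attn)
    return gen_loss + attn_weight * attn_loss
```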

Cited by 56 publications (68 citation statements) · References 26 publications
“…Shen et al. (2018) propose zero-shot cross-lingual headline generation to generate Chinese headlines for English articles, via a teacher-student framework, using two teacher models. Duan et al. (2019) propose a similar approach for cross-lingual abstractive sentence summarization. We note that our approach is much simpler and also focuses on a different kind of summarization task.…”
Section: Related Work
confidence: 99%
“…While the MultiLing benchmark covers 40 languages, it provides relatively few examples (10k in the 2019 release). Most approaches proposed so far have been extractive, given the lack of a multilingual corpus on which to train abstractive models (Duan et al., 2019).…”
Section: Multilingual Text Summarization
confidence: 99%
“…We circumvent this by replacing tags with task-specific transformer encoder layers that are added on top of the base encoder. This proposed architecture allows us to transfer supervision signals across languages and is potentially useful for other generation tasks, including question generation and sentence compression (Shen et al., 2018; Duan et al., 2019).…”
Section: Related Work
confidence: 99%
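The "task-specific transformer encoder layers added on top of the base encoder" in the statement above can be pictured with a short sketch. This is an assumed illustration of the general pattern, not the cited paper's implementation; all class and parameter names are hypothetical.

```python
# Hedged sketch: a shared base encoder with per-task encoder stacks on top,
# used in place of prepended task tags.
import torch.nn as nn

class TaskSpecificEncoder(nn.Module):
    def __init__(self, d_model=512, nhead=8, base_layers=6,
                 task_layers=2, tasks=("summarize", "compress")):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(d_model, nhead,
                                                   batch_first=True)
        # Shared base encoder: trained by every task and language.
        self.base = nn.TransformerEncoder(layer(), num_layers=base_layers)
        # One small encoder stack per task, applied on top of the base output.
        self.task_heads = nn.ModuleDict({
            t: nn.TransformerEncoder(layer(), num_layers=task_layers)
            for t in tasks
        })

    def forward(self, x, task):
        # x: (batch, src_len, d_model) embedded source tokens.
        h = self.base(x)
        return self.task_heads[task](h)
```

Under this pattern the base encoder receives gradients from every task, which is one way supervision signals can transfer across languages while the small per-task stacks specialize the representation.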