2019
DOI: 10.1609/aaai.v33i01.33019987
A Multi-Task Learning Framework for Abstractive Text Summarization

Abstract: We propose a Multi-task learning approach for Abstractive Text Summarization (MATS), motivated by the fact that humans have no difficulty performing such a task because they possess capabilities across multiple domains. Specifically, MATS consists of three components: (i) a text categorization model that learns rich category-specific text representations using a bi-LSTM encoder; (ii) a syntax labeling model that learns to improve the syntax-aware LSTM decoder; and (iii) an abstractive text summarization model that …
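
The component breakdown in the abstract suggests a single shared encoder feeding three task-specific heads. The sketch below is a minimal PyTorch illustration of that layout; the module names, dimensions, and the way the heads share the bi-LSTM encoder are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SharedBiLSTMEncoder(nn.Module):
    """Bi-LSTM encoder shared by all three tasks (illustrative)."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, tokens):
        x = self.embed(tokens)                 # (batch, seq_len, emb_dim)
        outputs, _ = self.bilstm(x)            # (batch, seq_len, 2*hidden_dim)
        return outputs

class MultiTaskSummarizer(nn.Module):
    """Hypothetical layout: categorization, syntax labeling, and summary
    generation heads over one shared encoder."""
    def __init__(self, vocab_size, num_categories, num_syntax_tags,
                 hidden_dim=256):
        super().__init__()
        self.encoder = SharedBiLSTMEncoder(vocab_size, hidden_dim=hidden_dim)
        enc_dim = 2 * hidden_dim
        self.category_head = nn.Linear(enc_dim, num_categories)   # document-level
        self.syntax_head = nn.Linear(enc_dim, num_syntax_tags)    # token-level
        # Simplified non-autoregressive "decoder" stand-in for the
        # syntax-aware LSTM decoder described in the abstract.
        self.decoder = nn.LSTM(enc_dim, enc_dim, batch_first=True)
        self.generator = nn.Linear(enc_dim, vocab_size)

    def forward(self, tokens):
        enc = self.encoder(tokens)
        category_logits = self.category_head(enc.mean(dim=1))
        syntax_logits = self.syntax_head(enc)
        dec_out, _ = self.decoder(enc)
        summary_logits = self.generator(dec_out)
        return category_logits, syntax_logits, summary_logits
```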

Cited by 11 publications (4 citation statements)
References 2 publications (5 reference statements)
“…In RL experiments, we train using BART from scratch, as opposed to using a model already fine-tuned on answer summarization, as we found that this model better learned to follow the given rewards. Following similar ratios in Lu et al (2019), we set (γ_rl, γ_ml) = (0.9, 0.1). Hyperparameters are tuned on the validation set.…”
Section: Rewards
confidence: 99%
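
The (γ_rl, γ_ml) ratio quoted above refers to a mixed training objective that combines a reinforcement-learning term with the usual maximum-likelihood loss, L = γ_rl · L_rl + γ_ml · L_ml. A minimal sketch of that weighting is below; the self-critical reward formulation and the function names are assumptions for illustration, not taken from either paper.

```python
import torch

def mixed_loss(ml_loss, sample_log_probs, sample_reward, baseline_reward,
               gamma_rl=0.9, gamma_ml=0.1):
    """Mix an RL term with a maximum-likelihood term:
    L = gamma_rl * L_rl + gamma_ml * L_ml (illustrative sketch).

    ml_loss:          scalar tensor, standard cross-entropy on references
    sample_log_probs: per-example log-probability of sampled summaries
    sample_reward:    per-example reward of sampled summaries (e.g. ROUGE)
    baseline_reward:  per-example reward of a baseline (e.g. greedy decode)
    """
    # Self-critical policy-gradient term: increase the likelihood of sampled
    # summaries whose reward beats the baseline, decrease it otherwise.
    advantage = sample_reward - baseline_reward
    rl_loss = -(advantage.detach() * sample_log_probs).mean()
    return gamma_rl * rl_loss + gamma_ml * ml_loss
```

With the quoted ratio, the RL term dominates the gradient while the small ML term keeps the model anchored to the reference summaries.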
“…Abstractive summarization has been enhanced in multitask learning frameworks with one additional task, by integrating it with text entailment generation (Pasunuru et al, 2017), extractive summarization (Chen et al, 2019; Hsu et al, 2018), and sentiment classification (Chan et al, 2020; Ma et al, 2018). While other research has combined multiple tasks, Lu et al (2019) integrated only predictive tasks, while Guo et al (2018) used only generative tasks. Recently, Dou and Neubig (2021) proposed using different tasks as guiding signals.…”
Section: Related Work
confidence: 99%