Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d17-1221

Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization

Abstract: When people recall and digest what they have read in order to write a summary, the important content is more likely to attract their attention. Inspired by this observation, we propose a cascaded attention based unsupervised model to estimate salient information in text for compressive multi-document summarization. The attention weights are learned automatically within an unsupervised data reconstruction framework that captures sentence salience. By adding sparsity constraints on the number of output …
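The abstract's core idea can be sketched compactly: attention weights over sentence embeddings are trained, without labels, to reconstruct a document representation, with a penalty pushing the weights toward a sparse, salience-like distribution. The following is a minimal illustrative sketch, not the authors' implementation; the embeddings are random stand-ins and the entropy penalty is only one possible stand-in for the paper's sparsity constraint.

```python
# Minimal sketch (not the authors' code): attention weights over sentence
# embeddings are learned unsupervised by reconstructing a document
# representation; an entropy penalty stands in for the paper's sparsity
# constraint, pushing attention mass onto a few salient sentences.
import torch
import torch.nn as nn

class ReconstructionAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)     # unnormalized per-sentence salience score
        self.decode = nn.Linear(dim, dim)  # maps the weighted sum back to document space

    def forward(self, sents: torch.Tensor):
        # sents: (num_sentences, dim) sentence embeddings of one document
        attn = torch.softmax(self.score(sents).squeeze(-1), dim=0)
        recon = self.decode(attn @ sents)  # reconstruct from the attention-weighted sum
        return recon, attn

dim = 64
model = ReconstructionAttention(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sents = torch.randn(12, dim)   # stand-in sentence embeddings
doc = sents.mean(dim=0)        # reconstruction target: a document embedding

for _ in range(200):
    recon, attn = model(sents)
    entropy = -(attn * attn.clamp_min(1e-8).log()).sum()  # low entropy = sparse attention
    loss = nn.functional.mse_loss(recon, doc) + 0.01 * entropy
    opt.zero_grad(); loss.backward(); opt.step()

recon, attn = model(sents)
print(attn.detach())  # higher weight ~ more salient sentence
```

After training, the attention vector itself serves as the salience estimate, which is the property the reconstruction objective is designed to induce.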

Cited by 33 publications (26 citation statements) · References 26 publications
“…The third group in the table shows the performance of autoencoder-based neural approaches. C-ATTENTION (Li et al., 2017a) presents different variants of our query-focused summarizer, which we call QUERYSUM. We show automatic results with distant supervision based on isolated Sentences (QUERYSUM_S), Passages (QUERYSUM_P), and an ensemble model (QUERYSUM_S+P) which combines both.…”
Section: Results (mentioning, confidence: 99%)
“…More recent work estimates the salience of text units within a sparse-coding framework by additionally taking into account reader comments associated with news reports. Li et al. (2017a) use a cascaded neural attention model to find salient sentences, whereas in follow-on work Li et al. (2017b) employ a generative model which maps sentences to a latent semantic space while a reconstruction model estimates sentence salience. There are also feature-based approaches achieving good results by optimizing sentence selection under a summary length constraint (Feigenblat et al., 2017).…”
Section: Related Work (mentioning, confidence: 99%)
“…They extract salient noun and verb phrases from a constituency tree, then produce sentences with representative phrases via integer linear programming. Later, Li et al. (2017b; 2017c) adopt a similar two-stage model, but they first estimate sentence and phrase salience via an auto-encoder framework.…”
Section: Abstractive Methods (mentioning, confidence: 99%)
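The selection step of the two-stage pipeline this statement describes, scoring phrases first and then assembling them under an integer linear program with a length budget, can be illustrated with a toy example. The sketch below uses PuLP with made-up phrases, salience scores, and budget; the cited models add grammaticality and co-occurrence constraints that are omitted here.

```python
# Toy sketch of ILP-based phrase selection under a length budget
# (illustrative only; scores, phrases, and budget are invented).
import pulp

phrases = ["the cascaded attention model", "estimates sentence salience",
           "an unsupervised reconstruction framework", "compressive summaries"]
salience = [0.9, 0.8, 0.6, 0.4]   # assumed precomputed phrase scores
length = [4, 3, 5, 2]             # token counts per phrase
budget = 8                        # summary length constraint

prob = pulp.LpProblem("phrase_selection", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(phrases))]
prob += pulp.lpSum(s * xi for s, xi in zip(salience, x))          # maximize total salience
prob += pulp.lpSum(l * xi for l, xi in zip(length, x)) <= budget  # respect length budget

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print([p for p, xi in zip(phrases, x) if xi.value() > 0.5])
```

The binary variables make the trade-off explicit: the solver keeps the highest-salience phrases that jointly fit the budget rather than greedily taking them one by one.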
“…Neural abstractive summarization models have been studied in the past [46,185,228] and later extended with source copying [170,236], reinforcement learning [207], and sentence salience information [154]. One model variant of Nallapati et al. [185] is related to our model in using sentence-level information in attention; however, our model differs in encoding the document with a hierarchical encoder, using discourse sections in the decoding step, and utilizing a coverage mechanism.…”
Section: Related Work (mentioning, confidence: 99%)
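The coverage mechanism this last statement mentions tracks accumulated attention so the decoder is discouraged from re-attending to source positions it has already covered. Below is a generic few-line sketch of that idea with random stand-in encoder and decoder states; it is not the cited model's architecture.

```python
# Generic sketch of coverage-augmented attention (random stand-in states):
# the coverage vector sums past attention distributions, feeds into the
# attention scores, and yields a loss that penalizes repeated attention.
import torch

dim, src_len = 32, 10
W_h, W_s = torch.randn(dim, dim), torch.randn(dim, dim)
w_c, v = torch.randn(dim), torch.randn(dim)

enc = torch.randn(src_len, dim)   # encoder states (stand-ins)
coverage = torch.zeros(src_len)   # accumulated attention over past decoder steps

for step in range(3):             # a few decoder steps
    dec = torch.randn(dim)        # decoder state at this step (stand-in)
    feats = torch.tanh(enc @ W_h + dec @ W_s + coverage[:, None] * w_c)
    attn = torch.softmax(feats @ v, dim=0)          # attention distribution
    cov_loss = torch.minimum(attn, coverage).sum()  # penalize re-attention
    coverage = coverage + attn                      # update coverage
    print(f"step {step}: coverage loss = {cov_loss.item():.4f}")
```

Because the loss takes the element-wise minimum of the current attention and prior coverage, it is zero until the model attends to the same position twice, which is exactly the repetition it is meant to suppress.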