Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1298

Extractive Summarization of Long Documents by Combining Global and Local Context

Abstract: In this paper, we propose a novel neural single-document extractive summarization model for long documents, incorporating both the global context of the whole document and the local context within the current topic. We evaluate the model on two datasets of scientific papers, PubMed and arXiv, where it outperforms previous work, both extractive and abstractive models, on ROUGE-1, ROUGE-2 and METEOR scores. We also show that, consistent with our goal, the benefits of our method become stronger as we apply it …
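The abstract's central idea — scoring each sentence against both a global (whole-document) context and a local (topic-segment) context — can be illustrated with a minimal sketch. This is not the paper's actual architecture (which uses neural encoders); the function name `score_sentences`, the cosine-similarity combination, and the weights below are hypothetical simplifications, assuming sentence embeddings are already available.

```python
import numpy as np

def score_sentences(sent_vecs, topic_ids, w_global=0.5, w_local=0.5):
    """Score each sentence by its cosine similarity to the whole-document
    centroid (global context) and its topic-segment centroid (local context)."""
    sent_vecs = np.asarray(sent_vecs, dtype=float)
    global_ctx = sent_vecs.mean(axis=0)  # centroid of the whole document

    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    scores = np.empty(len(sent_vecs))
    for i, (vec, topic) in enumerate(zip(sent_vecs, topic_ids)):
        # centroid of the sentences in the same topic segment
        members = [j for j, t in enumerate(topic_ids) if t == topic]
        local_ctx = sent_vecs[members].mean(axis=0)
        scores[i] = w_global * cos(vec, global_ctx) + w_local * cos(vec, local_ctx)
    return scores

# Toy example: 4 sentence embeddings, two topic segments
vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
topics = [0, 0, 1, 1]
ranking = np.argsort(-score_sentences(vecs, topics))  # best-scoring first
```

A summary would then be built by taking sentences in `ranking` order until a length budget is met; the actual model learns these context representations and weights rather than fixing them.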

Cited by 132 publications (106 citation statements)
References 28 publications
“…3. Multiple turns: Similar to long document summarization (Xiao and Carenini, 2019), conversations with many utterances contain more information to be processed and are thus harder to summarize.…”
Section: Challenges in Dialog Summarization
Citation type: mentioning, confidence: 99%
“…The timeline summarization task was first proposed by Allan et al. [2001], who extract a single sentence from each event within a news topic. Later, a series of works [Yan et al., 2011b; Yan et al., 2011a; Yan et al., 2012] further investigated the timeline summarization task, all of them using conventional learning methods to extract sentences from the timeline data. For instance, [Yan et al., 2011b] formulate timeline summarization as a balanced optimization problem solved via iterative substitution.…”
Section: Timeline Summarization
Citation type: mentioning, confidence: 99%
“…They also conduct experiments on the arXiv and PubMed datasets, where their model outperforms the baseline methods. [Xiao and Carenini, 2019] propose an extractive method for this task that uses both the global context of the whole document and the local context within the current topic, achieving state-of-the-art performance on the same two datasets.…”
Section: Extreme Long Document Summarization
Citation type: mentioning, confidence: 99%
“…More recently, further work along this line has started to incorporate discourse structures into supervised summarization, with the goal of better leveraging the (linguistic) structure of a document. Xiao and Carenini (2019) and Cohan et al. (2018) thereby use the natural structure of scientific papers (i.e., sections) to improve the inputs of their sequence models, better encoding long documents with a structural prior.…”
Section: Discourse and Summarization
Citation type: mentioning, confidence: 99%