2019
DOI: 10.3233/jifs-179011

Siamese hierarchical attention networks for extractive summarization

Abstract: In this paper, we present an extractive approach to document summarization based on Siamese Neural Networks. Specifically, we propose the use of Hierarchical Attention Networks to select the most relevant sentences of a text to make its summary. We train Siamese Neural Networks using document-summary pairs to determine whether the summary is appropriate for the document or not. By means of a sentence-level attention mechanism, the most relevant sentences in the document can be identified. Hence, once the netwo…
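The architecture described in the abstract can be sketched as follows: a Hierarchical Attention Network (word-level then sentence-level attention) is shared between the two branches of a Siamese network, one branch encoding the document and the other the candidate summary, and the sentence-level attention weights serve as relevance scores for extractive selection. This is a minimal PyTorch sketch, not the authors' implementation; the bidirectional GRU encoders and the concatenation-based matching head are assumptions, since the abstract does not specify the encoder or scoring function.

```python
import torch
import torch.nn as nn


class SentenceEncoder(nn.Module):
    """Encode a sentence (sequence of word embeddings) with a BiGRU
    plus word-level attention, producing one sentence vector."""

    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid_dim, 1)

    def forward(self, x):                           # x: (batch, words, emb_dim)
        h, _ = self.gru(x)                           # (batch, words, 2*hid_dim)
        w = torch.softmax(self.attn(h), dim=1)       # word attention weights
        return (w * h).sum(dim=1)                    # (batch, 2*hid_dim)


class HierarchicalAttentionEncoder(nn.Module):
    """Stack word- and sentence-level attention into a document vector.
    The sentence attention weights double as sentence relevance scores,
    which is what the extractive selection relies on."""

    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.sent_enc = SentenceEncoder(emb_dim, hid_dim)
        self.gru = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hid_dim, 1)

    def forward(self, doc):                          # doc: (batch, sents, words, emb)
        b, s, w, e = doc.shape
        sent_vecs = self.sent_enc(doc.view(b * s, w, e)).view(b, s, -1)
        h, _ = self.gru(sent_vecs)                   # (batch, sents, 2*hid_dim)
        alpha = torch.softmax(self.attn(h), dim=1)   # sentence relevance weights
        return (alpha * h).sum(dim=1), alpha.squeeze(-1)


class SiameseHAN(nn.Module):
    """Siamese setup: one shared HAN encodes both the document and the
    candidate summary; a classifier head (an assumption here) scores
    whether the summary is appropriate for the document."""

    def __init__(self, emb_dim=50, hid_dim=32):
        super().__init__()
        self.encoder = HierarchicalAttentionEncoder(emb_dim, hid_dim)
        self.clf = nn.Linear(4 * hid_dim, 1)

    def forward(self, doc, summ):
        d, alpha = self.encoder(doc)                 # alpha ranks document sentences
        s, _ = self.encoder(summ)
        logit = self.clf(torch.cat([d, s], dim=-1))
        return torch.sigmoid(logit), alpha
```

At inference time, the sentences with the highest `alpha` weights would form the extractive summary; the binary match score is only needed during training on document-summary pairs.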

Cited by 9 publications (28 citation statements). References 17 publications.
“…The SHA-NN system [15] is based on Hierarchical Attention Networks (HAN) [21] trained in a Siamese way, where its left branch extracts representations for whole documents and its right branch extracts representations for summaries. HAN allows us to extract a vector representation for documents and summaries from the representations of their sentences.…”
Section: System Description
confidence: 99%
“…The SHA-NN system [15] addresses a binary classification problem and selects the most relevant sentences by means of attention mechanisms. Unlike some of the Neural Network based works mentioned, this system does not require preparation of the corpus [11]; the system itself learns the alignment between document and summary.…”
Section: Related Work
confidence: 99%