2021
DOI: 10.1177/0165551521990616
Unsupervised extractive multi-document summarization method based on transfer learning from BERT multi-task fine-tuning

Abstract: Text representation is a fundamental cornerstone that impacts the effectiveness of several text summarization methods. Transfer learning using pre-trained word embedding models has shown promising results. However, most of these representations do not consider the order and the semantic relationships between words in a sentence, and thus they do not carry the meaning of a full sentence. To overcome this issue, the current study proposes an unsupervised method for extractive multi-document summarization based o…
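The abstract describes an unsupervised extractive pipeline built on sentence-level embeddings rather than word-level ones. The following is a minimal sketch of that general idea, centroid-based sentence scoring with a sentence-embedding model; the model name and the scoring rule are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch: centroid-based extractive scoring with sentence embeddings.
# Assumption: "all-MiniLM-L6-v2" stands in for the fine-tuned BERT encoder used
# in the paper; this is not the authors' exact pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer

def summarize(sentences, num_sentences=3, model_name="all-MiniLM-L6-v2"):
    model = SentenceTransformer(model_name)
    # Encode each sentence of the document cluster into a fixed-size, unit-length vector.
    embeddings = model.encode(sentences, normalize_embeddings=True)
    # The centroid approximates the main topic of the cluster.
    centroid = embeddings.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    # Rank sentences by cosine similarity to the centroid (vectors are unit-length).
    scores = embeddings @ centroid
    top = np.argsort(scores)[::-1][:num_sentences]
    # Restore original order so the extracted summary reads naturally.
    return [sentences[i] for i in sorted(top)]

print(summarize([
    "BERT produces contextual sentence representations.",
    "The weather was pleasant during the conference.",
    "Transfer learning reuses pre-trained language models.",
    "Extractive summarization selects salient sentences.",
]))
```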

Cited by 27 publications (10 citation statements)
References 49 publications
“…In transfer learning, there are two strategies for fine-tuning. The first is to freeze all convolutional feature-extraction layers of the pre-trained model and fine-tune only the classification layer; the second is to fine-tune all convolutional feature-extraction and classification layers of the pre-trained model (Lamsiyah et al., 2023).…”
Section: Methods (mentioning)
confidence: 99%
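The two strategies described in this statement can be made concrete with a short PyTorch sketch; the ResNet-18 backbone is only a stand-in for whatever pre-trained network the citing paper actually used.

```python
# Hedged sketch of the two fine-tuning strategies: freeze the feature extractor
# vs. fine-tune everything. The torchvision ResNet-18 is an assumed example model.
import torch.nn as nn
from torchvision import models

def build_model(num_classes, freeze_backbone=True):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if freeze_backbone:
        # Strategy 1: freeze all convolutional feature-extraction layers.
        for param in model.parameters():
            param.requires_grad = False
    # Replace the classification head; the new layer's parameters are trainable either way.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

frozen = build_model(num_classes=10, freeze_backbone=True)   # train only the classifier
full = build_model(num_classes=10, freeze_backbone=False)    # fine-tune all layers
```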
“…Fine-tuning is a technique used to adapt a pre-trained model to a new dataset. MBART50 is one of the pre-trained models on which fine-tuning is performed [33], [34].…”
Section: Fine-tuning Model (mentioning)
confidence: 99%
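As a rough illustration of the fine-tuning set-up this statement refers to, the sketch below loads a public MBART50 checkpoint with Hugging Face Transformers; the checkpoint name and the single training pair are assumptions for demonstration, not details taken from the citing work.

```python
# Hedged sketch: preparing MBART50 for sequence-to-sequence fine-tuning.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50"  # public checkpoint used here as an example
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Tokenize one (document, summary) pair; a real fine-tuning run would batch a dataset
# and feed these tensors to a Seq2SeqTrainer or a manual training loop.
inputs = tokenizer("A long source document to be summarized.",
                   text_target="A short summary.",
                   return_tensors="pt", truncation=True)
loss = model(**inputs).loss  # cross-entropy loss used to update the weights
```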
“…Unlike extractive summarization, which can produce poor sentences, abstractive summarization can produce grammatically correct sentences [6], [16], [17]. The abstractive method paraphrases and rearranges sentences into a summary [18], [19]. In this research, we will use the abstractive summarization method.…”
Section: Introduction (mentioning)
confidence: 99%
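For contrast with the extractive approach of the indexed paper, a minimal abstractive example using a public BART summarization checkpoint is shown below; the model choice is an assumption and not the one used in the citing research.

```python
# Hedged sketch: abstractive summarization with a pre-trained seq2seq pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = ("Extractive summarization copies salient sentences from the source, "
        "whereas abstractive summarization paraphrases and rearranges the content "
        "into new, grammatically well-formed sentences.")
print(summarizer(text, max_length=40, min_length=10, do_sample=False)[0]["summary_text"])
```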
“…Supervised methods need a significant volume of data, while unsupervised techniques require no training data [2,5,13,14,18]. Promising results have been obtained by transfer learning with pre-trained word embeddings in supervised settings [19].…”
Section: Introduction (mentioning)
confidence: 99%
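The word-embedding transfer mentioned in this statement is commonly realized as an average of pre-trained word vectors, which is exactly the order-insensitive representation the indexed paper argues against; the GloVe vectors below are an assumed stand-in for whichever embeddings the cited works used.

```python
# Hedged sketch: a sentence represented as the average of pre-trained GloVe vectors.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads pre-trained 100-d GloVe vectors

def sentence_embedding(sentence):
    words = [w for w in sentence.lower().split() if w in vectors]
    # Averaging discards word order, so "dog bites man" and "man bites dog" coincide.
    return np.mean([vectors[w] for w in words], axis=0)

print(sentence_embedding("transfer learning with pretrained embeddings").shape)
```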