2022
DOI: 10.48550/arxiv.2203.10945
Preprint

AraBART: a Pretrained Arabic Sequence-to-Sequence Model for Abstractive Summarization

Cited by 2 publications (3 citation statements)
References 0 publications
“…Recall, precision, and F-measure all improved to 72.94, 68.75, and 67.99. In (Eddine et al. 2022), AraBART was the first sequence-to-sequence pretrained Arabic model. Their model was tested on abstractive summarization tasks at various abstraction levels.…”
Section: Literature Review
Confidence: 99%
“…BERTScore [8] is yet another metric that computes a similarity score between the system sentence and the reference sentence based on pre-trained BERT contextual embeddings. The AraBART model [9] was evaluated using BERTScore.…”
Section: B. Evaluating Summaries
Confidence: 99%
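To illustrate the metric mentioned above, here is a minimal sketch of computing BERTScore with the bert-score Python package; the example strings and the choice of lang="ar" (which selects a multilingual BERT backbone) are illustrative assumptions, not the exact evaluation setup of the cited paper.

from bert_score import score

candidates = ["ملخص مولد بواسطة النموذج"]      # system-generated summary (placeholder)
references = ["الملخص المرجعي المكتوب يدويا"]  # human reference summary (placeholder)

# score() returns per-sentence precision, recall, and F1 tensors
P, R, F1 = score(candidates, references, lang="ar")
print(f"BERTScore F1: {F1.mean().item():.4f}")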
“…It has been used to evaluate various abstractive summarization models [3, 5, 9, 11-13], covering different techniques such as deep learning and graph-based models [14, 15].…”
Section: B. Evaluating Summaries
Confidence: 99%