2023
DOI: 10.3233/ssw230003
Using Pre-Trained Language Models for Abstractive DBPEDIA Summarization: A Comparative Study

Hamada M. Zahera,
Fedor Vitiugin,
Mohamed Ahmed Sherif
et al.

Abstract: Purpose: This study addresses the limitations of the current short abstracts of DBPEDIA entities, which often lack a comprehensive overview due to the method used to create them (i.e., selecting the first two to three sentences of the full DBPEDIA abstracts). Methodology: We leverage pre-trained language models to generate abstractive summaries of DBPEDIA abstracts in six languages (English, French, German, Italian, Spanish, and Dutch). We performed several experiments to assess the quality of generated summaries by languag…
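The truncation heuristic the abstract criticizes can be sketched as follows. This is a minimal illustration of selecting the first two to three sentences of a full abstract, not DBPEDIA's actual extraction pipeline; the function name and the naive sentence splitter are assumptions for illustration only.

```python
import re

def dbpedia_short_abstract(full_abstract: str, max_sentences: int = 3) -> str:
    """Hypothetical sketch of the criticized baseline: keep only the
    first few sentences of a full abstract. Sentence boundaries are
    detected naively at ., !, or ? followed by whitespace."""
    sentences = re.split(r"(?<=[.!?])\s+", full_abstract.strip())
    return " ".join(sentences[:max_sentences])

text = "First sentence. Second sentence. Third sentence. Fourth sentence."
print(dbpedia_short_abstract(text))
# → First sentence. Second sentence. Third sentence.
```

Because such a prefix can omit key facts stated later in the abstract, the study instead generates abstractive summaries with pre-trained language models.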

Cited by 0 publications
References 27 publications