2023
DOI: 10.7717/peerj-cs.1176

Abstractive text summarization of low-resourced languages using deep learning

Abstract: Background Humans must be able to cope with the huge amounts of information produced by the information technology revolution. As a result, automatic text summarization is being employed in a range of industries to assist individuals in identifying the most important information. For text summarization, two approaches are mainly considered: text summarization by the extractive and abstractive methods. The extractive summarization approach selects chunks of sentences from the source documents, while …
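The abstract's distinction can be illustrated concretely. The snippet below is a minimal sketch of the extractive approach only (copying high-scoring source sentences verbatim); the frequency-based scoring rule and sample document are illustrative assumptions, not the method proposed or evaluated in the paper.

from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Pick the highest-scoring sentences verbatim from the source text.

    Scoring is a simple word-frequency heuristic; it only illustrates the
    extractive idea of copying source sentences unchanged, in contrast to
    abstractive methods that generate new sentences.
    """
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    word_freq = Counter(w.lower() for s in sentences for w in s.split())
    scored = sorted(
        sentences,
        key=lambda s: sum(word_freq[w.lower()] for w in s.split()) / len(s.split()),
        reverse=True,
    )
    # Return the selected sentences in their original document order.
    chosen = set(scored[:num_sentences])
    return ". ".join(s for s in sentences if s in chosen) + "."

doc = ("Automatic summarization helps readers cope with large volumes of text. "
       "Extractive methods copy sentences from the source document. "
       "Abstractive methods generate new sentences instead.")
print(extractive_summary(doc))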

Cited by 9 publications (3 citation statements)
References 28 publications
“…In addition, these methods are a direct answer to tasks arising from LRLs, mirroring a broader academic and societal interest in fostering inclusivity and diversity in digital communication systems. The issue of LRLs has emerged as a focal point of inquiry and engagement within the scientific community (Shafiq et al., 2023; Karyukin et al., 2023; Sazzed, 2021; Ramponi et al., 2022; Farooq et al., 2023). Addressing the challenges posed by LRLs is seen as a pivotal step towards achieving linguistic equity in the digital domain.…”
Section: Related Work
Mentioning confidence: 99%
“…They found that AraT5 and AraGPT2 performed better than other language models. Shafiq et al. (2023) proposed abstractive summarization of Urdu. They used a multi-layer encoder and a single-layer decoder-based transformer.…”
Section: Summarization Of Low Resource Language
Mentioning confidence: 99%
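The architecture described in this citing passage (a multi-layer Transformer encoder paired with a single decoder layer) can be sketched with standard PyTorch modules. All hyperparameters below (vocabulary size, model dimension, number of heads and encoder layers) are illustrative assumptions, not the values reported by Shafiq et al. (2023), and positional encodings are omitted for brevity.

import torch
import torch.nn as nn

class Seq2SeqSummarizer(nn.Module):
    """Toy abstractive summarizer: multi-layer encoder, single-layer decoder.

    Hyperparameters are illustrative placeholders, not those of the cited paper.
    """

    def __init__(self, vocab_size=32000, d_model=256, n_heads=4,
                 num_encoder_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_encoder_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        # Single decoder layer, as described in the citing text.
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=1)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(self.embed(src_ids))
        # Causal mask: each summary position may only attend to earlier ones.
        t = tgt_ids.size(1)
        tgt_mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        hidden = self.decoder(self.embed(tgt_ids), memory, tgt_mask=tgt_mask)
        return self.out(hidden)  # per-token vocabulary logits

# Smoke test with random token ids.
model = Seq2SeqSummarizer()
src = torch.randint(0, 32000, (2, 50))   # batch of 2 source "documents"
tgt = torch.randint(0, 32000, (2, 12))   # shifted summary tokens
print(model(src, tgt).shape)             # torch.Size([2, 12, 32000])

Keeping the decoder to a single layer shrinks the generation side of the model, a trade-off that can be attractive when training data for the target language is scarce.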
“…As a result, several methodologies have integrated multi-granular information with graph models to amplify text summarization effectiveness. In the present study, a sentence-topic-word structure has been established, proving instrumental in addressing cross-sentence dependencies for both single-document and multi-document summarization tasks (Chen, 2023; Mao et al., 2021; Shafiq et al., 2023). Moreover, an approach employing adaptive breadth and depth has been implemented to update nodes of varied granularity, thereby facilitating a more effective aggregation of node-specific information.…”
Section: Related Work
Mentioning confidence: 99%
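The sentence-topic-word structure mentioned in this passage can be pictured as a small heterogeneous graph. The sketch below uses networkx with hand-assigned topics purely for illustration; the actual node types, topic models, and edge-weighting schemes of the cited works are not reproduced.

import networkx as nx

# Toy corpus: two sentences and hand-assigned "topics" (illustrative only).
sentences = {
    "s1": "urdu text summarization with transformers",
    "s2": "transformers improve abstractive summarization quality",
}
topics = {"t1": {"summarization", "abstractive"}, "t2": {"transformers", "urdu"}}

G = nx.Graph()
# Add the sentence and topic granularities as typed nodes.
for sid in sentences:
    G.add_node(sid, kind="sentence")
for tid in topics:
    G.add_node(tid, kind="topic")

for sid, text in sentences.items():
    for word in set(text.split()):
        G.add_node(word, kind="word")
        G.add_edge(sid, word)          # sentence-word edge: word occurs in sentence
        for tid, vocab in topics.items():
            if word in vocab:
                G.add_edge(tid, word)  # topic-word edge: word belongs to topic

# A cross-sentence dependency surfaces as a short path through shared words or topics.
print(nx.shortest_path(G, "s1", "s2"))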