2022
DOI: 10.1007/978-3-031-06458-6_13
Automatic Text Summarization for Moroccan Arabic Dialect Using an Artificial Intelligence Approach

Kamel Gaanoun, Abdou Mohamed Naira, Anass Allak, et al.
Cited by 8 publications (8 citation statements)
References 20 publications
“…This representation is fed to a BiLSTM encoder to generate the sentence representations. Finally, using a pre-trained Named Entity Recognition (NER) model called "Marefa-NLP/marefa-ner" (Gaanoun et al., 2022), we labeled information from the extractive summary, classifying each item into one of nine categories (person, location, organization, nationality, job, product, event, time, and artwork). This model used an auto tokenizer to tokenize sentences with hyper-parameters such as padding=True, truncation=True, and return_tensors="pt", weighting sentences and sentence positions and capturing the relationships among sentences to represent hidden states.…”
Section: Sequence To Sequence Model
confidence: 99%
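The quoted pipeline applies the pre-trained "marefa-nlp/marefa-ner" checkpoint through a Hugging Face auto tokenizer. Below is a minimal Python sketch of that labeling step, assuming the transformers and torch libraries; the example sentence is illustrative, not taken from the cited work.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

MODEL_ID = "marefa-nlp/marefa-ner"  # pre-trained Arabic NER model named in the quote

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForTokenClassification.from_pretrained(MODEL_ID)

# Tokenize with the hyper-parameters quoted above:
# padding=True, truncation=True, return_tensors="pt" (PyTorch tensors).
sentence = "ولد محمد في الدار البيضاء"  # illustrative Arabic input
inputs = tokenizer(sentence, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Map each sub-word token to its predicted IOB-style entity tag; the label
# set covers the nine categories listed in the quote (person, location,
# organization, nationality, job, product, event, time, artwork).
label_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[int(label_id)])
```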
“…This representation is fed to a BiLSTM encoder to generate the sentence representations. Finally, using a pre-trained Named Entity Recognition (NER) model called "Marefa-NLP/marefa-ner" (Gaanoun et al., 2022), we labeled information from the extractive summary, classifying each item into one of nine categories (person, location, organization, nationality, job, product, event, time, and artwork). This model used an auto tokenizer to tokenize sentences with hyper-parameters such as padding=True, truncation=True, and return_tensors="pt", weighting sentences and sentence positions and capturing the relationships among sentences to represent hidden states.…”
Section: Encoder
confidence: 99%
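The same statement describes feeding token representations to a BiLSTM encoder to obtain sentence representations. A minimal PyTorch sketch of such an encoder follows; the dimensions and class name are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class BiLSTMSentenceEncoder(nn.Module):
    """Encode a batch of token-embedding sequences into fixed-size
    sentence representations with a bidirectional LSTM."""

    def __init__(self, embed_dim: int = 300, hidden_dim: int = 256):
        super().__init__()
        self.bilstm = nn.LSTM(
            input_size=embed_dim,
            hidden_size=hidden_dim,
            batch_first=True,
            bidirectional=True,
        )

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim)
        _, (h_n, _) = self.bilstm(token_embeddings)
        # h_n holds the final forward and backward hidden states;
        # concatenating them yields one vector per sentence.
        return torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2 * hidden_dim)

# Usage: a batch of 4 sentences, 20 tokens each, 300-dim embeddings.
encoder = BiLSTMSentenceEncoder()
sentence_vectors = encoder(torch.randn(4, 20, 300))
print(sentence_vectors.shape)  # torch.Size([4, 512])
```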
“…In abstractive summarization, the primary challenge lies in obtaining labeled data, since training requires documents paired with their summaries. Many existing works employing this method rely on press articles [10,12,28,30], typically treating each article as the document and its headline as the summary.…”
Section: Text Summarization
confidence: 99%
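The headline-as-summary scheme described above can be made concrete with a short sketch: each press article becomes a training document and its headline serves as the reference summary. The CSV layout and field names here are hypothetical.

```python
import csv

def load_article_headline_pairs(path: str) -> list[tuple[str, str]]:
    """Read (document, summary) training pairs from a CSV of press articles,
    treating each headline as the weak reference summary of its article."""
    pairs = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            article = row["article"].strip()
            headline = row["headline"].strip()
            if article and headline:  # skip rows missing either field
                pairs.append((article, headline))
    return pairs
```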
“…Our proposed architecture draws inspiration from the works in [10,14,29]. It follows the pipeline outlined in Figure 4 and is structured as follows:…”
Section: Text Summarization
confidence: 99%