2022
DOI: 10.1109/access.2022.3159252
Arabic Aspect Extraction Based on Stacked Contextualized Embedding With Deep Learning

Abstract: The exponential growth of the internet and a multi-fold increase in social media users over the last decade have produced massive volumes of unstructured data. Aspect-Based Sentiment Analysis (ABSA) is challenging because it performs fine-grained analysis: it is a text-analysis technique in which opinions are grouped by aspect. The Aspect Extraction (AE) task is one of the core subtasks of ABSA; it identifies aspect terms in text, comments, or reviews. The challenge of the Arabic AE task i…
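The AE task described in the abstract is commonly framed as BIO sequence labeling over review tokens. The following minimal sketch (an illustrative assumption, not the paper's method — the sentence, tags, and `extract_aspects` helper are invented for demonstration) shows how aspect terms are recovered from such tags:

```python
# Hypothetical example: aspect extraction as BIO sequence labeling.
# A model (e.g. BiLSTM-CRF) would predict the tags; here they are given.
tokens = ["The", "battery", "life", "is", "great", "but", "the", "screen", "flickers"]
tags   = ["O",   "B-ASP",   "I-ASP", "O",  "O",     "O",   "O",   "B-ASP",  "O"]

def extract_aspects(tokens, tags):
    """Collect contiguous B-ASP/I-ASP spans into aspect terms."""
    aspects, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B-ASP":
            if current:                      # close the previous span
                aspects.append(" ".join(current))
            current = [tok]                  # start a new span
        elif tag == "I-ASP" and current:
            current.append(tok)              # extend the current span
        else:
            if current:
                aspects.append(" ".join(current))
            current = []
    if current:                              # flush a span at end of sentence
        aspects.append(" ".join(current))
    return aspects

print(extract_aspects(tokens, tags))  # ['battery life', 'screen']
```

The same scheme applies to Arabic reviews; only the tokenizer and the tagging model change.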

Cited by 22 publications (12 citation statements)
References 58 publications
“…Note that none of the previous models for Arabic used multi-task learning, so the comparisons were made against single-task models. As shown in Table 7, AR-LCF-ATEPC-Fusion achieved the best results, with an F1 score of 75.94% for the ATE task, thereby outperforming all comparison models (except our previous single-task ATE method [75]). For the APC task, AR-LCF-ATEPC-Fusion outperformed all comparison models with an accuracy of 91.5% and an F1 score of 76.74%, improving accuracy by 2%.…”
Section: Performance Of Proposed Multi-task Model On The Original Dat… (mentioning)
confidence: 85%
“…To validate the effectiveness of the multi-task model, we compared the best multi-task model (AR-LCF-ATEPC-Fusion) with state-of-the-art deep-learning- and transformer-based approaches that used the same benchmark dataset: RNN-BiLSTM-CRF [69], BiGRU [70], an attention mechanism with a neural network [71], BERT [72], BERT-Flair-BiLSTM/BiGRU-CRF [75], a sequence-to-sequence model for preprocessing with BERT for classification (Seq-seq BERT) [76], and BERT with a linear layer (BERT-linear-pair) [77]. The results demonstrated that the LCF-ATEPC model outperformed the other comparable models.…”
Section: Performance Of Proposed Multi-task Model On The Original Dat… (mentioning)
confidence: 99%
“…Fadel et al. [43] proposed the BF-BiLSTM-CRF model, based on BERT, to handle the target-extraction task. They combined contextualized string embeddings with the BERT language model to improve word representation in the embedding layer, and on top of it they stacked two BiLSTM layers with a CRF layer as the output layer.…”
Section: Related Work (mentioning)
confidence: 99%
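The embedding-stacking step described in the citation above (contextualized string embeddings concatenated with BERT vectors before the BiLSTM-CRF layers) can be sketched as follows. This is a hedged illustration, not the paper's implementation: the `fake_encoder` helper stands in for real pretrained encoders, and the 768/2048 dimensions are assumed, typical sizes:

```python
# Sketch of "stacked" embeddings: per-token vectors from two encoders
# are concatenated into one wider vector per token, which would then
# feed the BiLSTM layers. Encoders here are random stand-ins.
import random

random.seed(0)

def fake_encoder(tokens, dim):
    """Stand-in for a pretrained encoder: one dim-sized vector per token."""
    return [[random.random() for _ in range(dim)] for _ in tokens]

def stack_embeddings(tokens, encoders):
    """Concatenate each encoder's vector for every token (the 'stacking')."""
    outputs = [enc(tokens) for enc in encoders]
    return [sum((vecs[i] for vecs in outputs), []) for i in range(len(tokens))]

tokens = ["الخدمة", "ممتازة"]  # "the service is excellent"
stacked = stack_embeddings(
    tokens,
    [lambda t: fake_encoder(t, 768),    # BERT-like vectors (assumed size)
     lambda t: fake_encoder(t, 2048)],  # string-embedding-like vectors (assumed size)
)
print(len(stacked), len(stacked[0]))  # 2 tokens, each 768 + 2048 = 2816 dims
```

In practice, libraries such as Flair expose this pattern directly via a stacked-embeddings abstraction; the concatenated vectors are what the BiLSTM consumes.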
“…Intelligent education is an on-campus educational model that makes students' study and daily life more convenient and improves children's ability to explore and understand new things and unknown worlds [9][10]. Many colleges and universities have now begun to use this teaching model to carry out curriculum reform.…”
Section: Intelligent Education (mentioning)
confidence: 99%