2021
DOI: 10.1007/s13278-021-00794-4
Towards Arabic aspect-based sentiment analysis: a transfer learning-based approach

Cited by 40 publications (14 citation statements) | References 28 publications
“…Their approach used 501 Indonesian amusement park reviews for testing and reported an accuracy of 79.9% and an F1 score of 73.8%. Bensoltane and Zaki [50] compared variants of BERT against bidirectional long short-term memory (BiLSTM) and conditional random field (CRF) models for aspect extraction using 2265 Arabic news posts (retrieved from Facebook, concerning the 2014 Gaza attack). They showed that the combination of BERT, a bidirectional gated recurrent unit (BiGRU), and CRF achieved the highest performance for aspect term extraction, with an F1 score of 88%.…”
Section: Related Work
confidence: 99%
“…A few works fine-tuned BERT with a linear classification layer for Arabic aspect polarity classification [67]. Bensoltane et al. [68] proposed a BERT-BiLSTM-CRF model for aspect extraction (AE) from a news dataset that outperformed previous works on this dataset.…”
Section: B. Arabic Language
confidence: 99%
“…Bensoltane and Zaki [42] investigated the modeling power of BERT in aspect-extraction and aspect-category identification tasks, in addition to examining the effects of adding stronger layers on top of BERT for the ATE task.…”
Section: Related Work
confidence: 99%