2020
DOI: 10.1007/s42979-020-00270-4

Extracting Opinion Targets Using Attention-Based Neural Model

Abstract: Extracting opinion-target expressions is a core subtask of aspect-based sentiment analysis, which aims to identify the aspects discussed within a text together with their opinion targets and to classify the sentiment as positive, negative, or neutral. This paper proposes a deep learning model to tackle the opinion-target expression extraction task. The proposed model is composed of a bidirectional long short-term memory network as an encoder, a long short-term memory network as a decoder with an attention mechanism, and cond…
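To make the architecture described in the abstract concrete, here is a minimal, illustrative PyTorch sketch (not the authors' code): a BiLSTM encoder feeding an LSTM decoder that attends over the encoder states and emits per-token tag scores for opinion-target extraction. The class name, dimensions, attention form, and tag set are all assumptions, and the paper's final layer (truncated in the abstract above) is replaced by a plain linear projection so the sketch stays self-contained.

```python
import torch
import torch.nn as nn

class AttnTargetExtractor(nn.Module):
    """Hypothetical sketch: BiLSTM encoder + attention-based LSTM decoder.

    The final layer of the paper's model is not shown in the truncated
    abstract; a linear projection to BIO tag scores is used here instead.
    """
    def __init__(self, vocab_size, emb_dim=100, hid=128, num_tags=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM encoder over the input tokens
        self.encoder = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
        # Decoder consumes [attention context; encoder state at step t]
        self.decoder = nn.LSTMCell(4 * hid, 2 * hid)
        # Additive-style attention scorer (an assumption; the paper may differ)
        self.attn = nn.Linear(4 * hid, 1)
        # Per-token projection to tag scores
        self.out = nn.Linear(2 * hid, num_tags)

    def forward(self, tokens):
        enc, _ = self.encoder(self.emb(tokens))           # (B, T, 2*hid)
        B, T, H2 = enc.shape
        h = enc.new_zeros(B, H2)                          # decoder hidden state
        c = enc.new_zeros(B, H2)                          # decoder cell state
        scores = []
        for t in range(T):
            # Attend over all encoder states, conditioned on decoder state h
            q = h.unsqueeze(1).expand(-1, T, -1)          # (B, T, 2*hid)
            a = torch.softmax(
                self.attn(torch.cat([q, enc], dim=-1)).squeeze(-1), dim=-1)
            ctx = torch.bmm(a.unsqueeze(1), enc).squeeze(1)   # (B, 2*hid)
            h, c = self.decoder(torch.cat([ctx, enc[:, t]], dim=-1), (h, c))
            scores.append(self.out(h))                    # tag scores for token t
        return torch.stack(scores, dim=1)                 # (B, T, num_tags)

# Usage with a hypothetical vocabulary and batch of 2 twelve-token sentences:
model = AttnTargetExtractor(vocab_size=5000)
tags = model(torch.randint(0, 5000, (2, 12)))             # -> shape (2, 12, 3)
```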


Cited by 13 publications (10 citation statements). References 43 publications.
“…We compare the proposed models with baseline [6] and previous models that used traditional deep learning, RNN [63], BiLSTM-CRF with Word2vec/fastText as word embedding [64], and an attention-based neural model [65].…”
Section: Results
confidence: 99%
“…The results showed improvement over the baseline research on both tasks (39% for task 1 and 6% for task 2). In [65], the authors used an attention mechanism for AE, and performance improved to a 72.8 F-score. In [66], a BiGRU was used.…”
Section: B. Arabic Language
confidence: 99%
“…To validate the effectiveness of the multi-task model, we compared the best multi-task model (AR-LCF-ATEPC-Fusion) with state-of-the-art deep-learning-based and transformer-based approaches that used the same benchmark dataset: RNN-BiLSTM-CRF [69], BiGRU [70], attention mechanism with neural network [71], BERT [72], Bert-Flair-BiLSTM/BiGRU-CRF [75], a sequence-to-sequence model for preprocessing with BERT for classification (Seq-seq BERT) [76], and BERT with a linear layer (Bert-linerpair) [77]. The results demonstrated that the LCF-ATEPC model outperformed the other comparable models.…”
Section: Performance of Proposed Multi-task Model on the Original Dat…
confidence: 99%
“…Most studies of Arabic ABSA evaluate the ATE and ASC subtasks independently, ignoring the relatedness and dependency of the two subtasks [2,3,4,5,6,7]. Some studies either only extract aspects from a given sentence (ATE) [8,9,10] or predict sentiment polarities (ASC) under the assumption that aspect entities are pre-identified input features to the model, which is not the case in real-world scenarios [11,12,13,14].…”
Section: Introduction
confidence: 99%