Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1549

Syntax-Aware Aspect Level Sentiment Classification with Graph Attention Networks

Abstract: Aspect level sentiment classification aims to identify the sentiment expressed towards an aspect given a context sentence. Previous neural network based methods largely ignore the syntax structure in one sentence. In this paper, we propose a novel target-dependent graph attention network (TD-GAT) for aspect level sentiment classification, which explicitly utilizes the dependency relationship among words. Using the dependency graph, it propagates sentiment features directly from the syntactic context of an aspect…
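For readers who want a concrete picture of how features propagate along syntactic edges, here is a minimal sketch of one graph attention layer over a dependency graph, in the spirit of TD-GAT. This is not the authors' released implementation; the class name, the dimensions, and the assumption that the adjacency matrix already encodes dependency edges plus self-loops are all illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    # One GAT layer: each word attends only to its dependency-graph
    # neighbours, so features flow along syntactic edges rather than
    # linear word order.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (n_words, in_dim); adj: (n_words, n_words) 0/1 matrix of
        # dependency edges, assumed to include self-loops.
        z = self.W(h)                                   # (n, out_dim)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs).squeeze(-1))  # (n, n) raw scores
        e = e.masked_fill(adj == 0, float("-inf"))      # keep tree edges only
        alpha = torch.softmax(e, dim=-1)                # attention over neighbours
        return F.elu(alpha @ z)                         # aggregated features

Stacking a few such layers lets the aspect node aggregate sentiment signals from its multi-hop syntactic neighbourhood, which is the propagation the abstract describes.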

Cited by 232 publications (152 citation statements). References 31 publications.
“…For example, Xue and Li [40] proposed a CNN with gate mechanisms to learn representations for aspects and opinion words. Huang and Carley [41] proposed taking aspect information into account and adopting a CNN for ALSC. Xing et al. [42] incorporated attention-based input layers into a CNN to introduce context-word information.…”
Section: A. Aspect-Level Sentiment Classification
Confidence: 99%
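As a rough illustration of the gating idea attributed to Xue and Li above, the sketch below pairs a tanh feature branch with an aspect-conditioned ReLU gate (GTRU-style). The class and parameter names are hypothetical and not taken from the cited paper's code.

import torch
import torch.nn as nn

class GatedAspectCNN(nn.Module):
    # GCAE-style block: one convolution branch computes sentiment
    # features, the other computes an aspect-conditioned gate.
    def __init__(self, emb_dim, aspect_dim, n_filters, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv_s = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=pad)
        self.conv_a = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=pad)
        self.aspect_proj = nn.Linear(aspect_dim, n_filters)

    def forward(self, x, aspect):
        # x: (batch, seq_len, emb_dim); aspect: (batch, aspect_dim)
        x = x.transpose(1, 2)                       # (batch, emb_dim, seq_len)
        s = torch.tanh(self.conv_s(x))              # candidate sentiment features
        a = self.aspect_proj(aspect).unsqueeze(-1)  # aspect bias per filter
        g = torch.relu(self.conv_a(x) + a)          # gate keeps aspect-relevant features
        return torch.max(s * g, dim=-1).values      # max-over-time pooling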
“…They also apply pre-trained BERT and obtain new state-of-the-art results on the SemEval 2014 dataset. Huang and Carley [32] propose a novel target-dependent graph attention network (TD-GAT) for ABSA by explicitly utilizing the dependency relationships among words, and also demonstrate that using BERT representations further boosts performance substantially.…”
Section: Related Work, A. Pre-training Language Models
Confidence: 99%
“…From the above results, we found that the performance of these two XLNetCN models is unsteady under different iteration-number and capsule-number settings. Generally, though, both reach their best when the iteration number is set to 3 and the pair (cn, d_c) is (24, 32). So, in the following experiments we take 3 as the default value for the iteration number n and (24, 32) for (cn, d_c) without further explicit mention.…”
Section: A. Experiments 1: Effects of Iterative Routing Analysis
Confidence: 99%
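The iteration number tuned above is the routing-iteration count in capsule networks' routing-by-agreement. The sketch below assumes standard dynamic routing rather than XLNetCN's exact variant, purely to show where that hyperparameter enters.

import torch
import torch.nn.functional as F

def squash(s, eps=1e-8):
    # Non-linearity that preserves orientation and bounds length in [0, 1).
    norm2 = (s ** 2).sum(dim=-1, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    # u_hat: prediction vectors, shape (batch, in_caps, out_caps, dim).
    # Coupling logits b start at zero and are refined for `iterations`
    # rounds; this is the hyperparameter reported best at 3 above.
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)
    for _ in range(iterations):
        c = F.softmax(b, dim=-1)                  # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)  # (batch, out_caps, dim)
        v = squash(s)                             # output capsules
        b = b + (u_hat * v.unsqueeze(1)).sum(-1)  # agreement update
    return v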
“…On the other hand, in the syntactic dependency tree, the word "delicious" is closer to the target "hotpot" (see Figure 1). In addition, the use of syntactic dependency trees also helps to resolve potential ambiguity in word sequences [16]. In the simple sentence "nice beef terrible juice", the adjectives "nice" and "terrible" could attach to either noun if only the word sequence is considered.…”
Section: Introduction
Confidence: 99%
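The tree-distance claim is easy to check with an off-the-shelf parser. The sketch below uses spaCy with an illustrative sentence (not necessarily the cited paper's example) to compare linear distance against dependency-tree distance between an aspect word and an opinion word.

import spacy
from collections import deque

def tree_distance(doc, i, j):
    # Breadth-first search over the undirected dependency graph.
    adj = {t.i: [t.head.i] + [c.i for c in t.children] for t in doc}
    seen, queue = {i}, deque([(i, 0)])
    while queue:
        node, d = queue.popleft()
        if node == j:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return -1  # unreachable; should not occur in a parse tree

nlp = spacy.load("en_core_web_sm")
doc = nlp("The hotpot we ordered at this crowded place yesterday was absolutely delicious.")
aspect = next(t for t in doc if t.text == "hotpot")
opinion = next(t for t in doc if t.text == "delicious")
print("sequence distance:", abs(aspect.i - opinion.i))            # many tokens apart
print("tree distance:", tree_distance(doc, aspect.i, opinion.i))  # only a few hops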