Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
DOI: 10.18653/v1/2021.emnlp-main.19

Beta Distribution Guided Aspect-aware Graph for Aspect Category Sentiment Analysis with Affective Knowledge

Abstract: In this paper, we investigate the Aspect Category Sentiment Analysis (ACSA) task from a novel perspective by exploring a Beta Distribution guided aspect-aware graph construction based on external knowledge. That is, rather than laboriously searching the context for sentiment clues of coarse-grained aspects, we focus on finding the context words that are highly sentiment-related to the aspects and determining their importance based on the public knowledge base, so as to naturally…
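As a rough illustration of the idea sketched in the abstract, the snippet below maps knowledge-base-derived sentiment-relatedness scores of context words to normalized graph edge weights through a Beta probability density. It is a minimal sketch, not the authors' implementation: the scores, the fixed Beta parameters, and the function name beta_guided_weights are hypothetical placeholders, whereas in the paper the weighting is guided by an external affective knowledge base.

```python
# Hypothetical sketch: weight context words for an aspect by evaluating a
# Beta density over knowledge-derived sentiment-relatedness scores in [0, 1].
# Scores and Beta parameters are illustrative placeholders, not from the paper.
import numpy as np
from scipy.stats import beta

def beta_guided_weights(relatedness, a=3.0, b=1.0):
    """Map relatedness scores in [0, 1] to normalized graph edge weights
    via the Beta(a, b) probability density function."""
    x = np.clip(np.asarray(relatedness, dtype=float), 1e-6, 1 - 1e-6)
    density = beta.pdf(x, a, b)        # higher density -> more important word
    return density / density.sum()     # normalize so the weights sum to 1

# Toy usage: made-up scores for the words of
# "the pasta was great but the room was noisy" w.r.t. the aspect "food".
scores = [0.05, 0.9, 0.1, 0.8, 0.05, 0.05, 0.2, 0.1, 0.3]
print(beta_guided_weights(scores).round(3))
```

With a = 3 and b = 1 the density grows with the score, so words the knowledge base marks as more sentiment-related to the aspect receive larger edge weights; the paper instead determines word importance from the public knowledge base rather than fixing the parameters by hand.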

Cited by 15 publications (9 citation statements); References: 34 publications.

“…Liang et al. [12] used Beta distributions to infer aspect-related weights for each aspect-related word, and their aspect-aware graph construction took into account entity effects on attribute sentiment, effectively improving accuracy on this task. To make graph convolutional networks more effective at extracting aspect-category sentiment from sentence information, it is suggested to consider the impact of entities and to incorporate a multi-head attention convolution fusion graph convolutional network for deeper feature extraction.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…AAGCN-BERT [12]: This paper explores the use of Beta distributions to infer aspect weights for each aspect word in external-knowledge-guided aspect graph construction. From the experimental results, we can conclude that the ACMAPGCNN model performs slightly worse than the AAGCN-BERT model, except on the lap16 dataset.…”
Section: Benchmark Model (citation type: mentioning)
confidence: 99%
“…Recent studies in COSC have utilized various techniques, such as text generation frameworks (Liu et al., 2021), aspect-aware graphs (Liang et al., 2021), and auxiliary sentence construction methods using BERT (Sun et al., 2019). Other approaches include aspect-aware LSTM models (Xing et al., 2019), attentive LSTM models that embed commonsense knowledge (Ma et al., 2018), the SentiHood dataset introduced by Saeidi et al. (2016), and hierarchical models (Ruder et al., 2016).…”
Section: Category-oriented Sentiment Classification (COSC) (citation type: mentioning)
confidence: 99%
“…Models based on graph neural networks (GNN), including the graph convolutional network (GCN) (Kipf and Welling, 2017) and the graph attention network (GAT) (Velickovic et al., 2018), have achieved promising performance in many recent studies, such as visual representation learning (Wu et al., 2019; Xie et al., 2021), text representation learning (Yao et al., 2019; Lou et al., 2021; Liang et al., 2021b, 2022), and recommendation systems (Ying et al., 2018; Tan et al., 2020). Further, some studies have explored graph models for multi-modal tasks, such as multi-modal sentiment detection (Yang et al., 2021), multi-modal named entity recognition, cross-modal video moment retrieval (Zeng et al., 2021), multi-modal neural machine translation (Yin et al., 2020), and multi-modal sarcasm detection (Liang et al., 2021a).…”
Section: Graph Neural Network (citation type: mentioning)
confidence: 99%
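Since the excerpt above centers on graph convolutional networks (Kipf and Welling, 2017), a compact sketch of a single GCN layer over a weighted adjacency matrix may help. It is a generic illustration of the standard symmetrically normalized formulation, not code from any cited paper; the function name gcn_layer and the toy shapes are assumptions.

```python
# Generic sketch of one GCN layer (Kipf & Welling, 2017) over a weighted
# adjacency matrix; shapes and inputs are illustrative only.
import torch

def gcn_layer(H, A, W):
    """H: node features (n, d_in); A: weighted adjacency (n, n); W: (d_in, d_out).
    Applies D^{-1/2} (A + I) D^{-1/2} normalization, then a linear map and ReLU."""
    n = A.size(0)
    A_hat = A + torch.eye(n)                          # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(A_norm @ H @ W)                 # aggregate, then transform

# Toy usage: 4 word nodes with 8-dim features projected to 6 dims; the edge
# weights could come from, e.g., a Beta-guided construction as sketched above.
H = torch.randn(4, 8)
A = torch.rand(4, 4)
print(gcn_layer(H, A, torch.randn(8, 6)).shape)       # torch.Size([4, 6])
```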
“…Here, the nodes of the cross-modal graph are the representations of the text and image modalities. Many GCN-based approaches have demonstrated that the weights of the edges are crucial in graph information aggregation (Liang et al., 2021b; Yang et al., 2021; Lou et al., 2021). As such, constructing a cross-modal graph boils down to setting the edge weights of the graph.…”
Section: Cross-modal Graph (citation type: mentioning)
confidence: 99%
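The excerpt notes that constructing a cross-modal graph largely reduces to setting its edge weights. The sketch below shows one common, assumed way to do that, namely cosine similarity between text-token and image-region features; the cited papers may set their weights differently, and the function name cross_modal_edges is hypothetical.

```python
# Assumed illustration: cross-modal edge weights from cosine similarity
# between text-token and image-region representations (bipartite block).
import torch
import torch.nn.functional as F

def cross_modal_edges(text_feats, image_feats):
    """text_feats: (n_t, d); image_feats: (n_v, d).
    Returns an (n_t, n_v) matrix of non-negative edge weights."""
    t = F.normalize(text_feats, dim=-1)   # unit-norm text features
    v = F.normalize(image_feats, dim=-1)  # unit-norm image features
    sim = t @ v.t()                       # cosine similarity in [-1, 1]
    return sim.clamp(min=0.0)             # keep only positively related pairs

# Toy usage: 5 text tokens and 3 image regions with 16-dim features.
edges = cross_modal_edges(torch.randn(5, 16), torch.randn(3, 16))
print(edges.shape)                        # torch.Size([5, 3])
```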