Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.145
Discrete Opinion Tree Induction for Aspect-based Sentiment Analysis

Cited by 61 publications (25 citation statements) | References: 0 publications
“…We use IARM (Majumder et al 2018), MIAD (Hazarika et al 2018), StageI+StageII (Ma et al 2019), and CDT (Sun et al 2019) as baselines because they model aspect-related information using LSTMs. We also use Joint+PRET (Zhou et al 2020), RepWalk (Zheng et al 2020), BERT-SPC (Song et al 2019), CapsNet (Jiang et al 2019), SDGCN (Zhao, Hou, and Wu 2020), InterGCN (Liang et al 2020), R-GAT (Wang et al 2020), T-GCN (Tian, Chen, and Song 2021), RGAT (Bai, Liu, and Zhang 2021), RMN (Zeng et al 2022), dot-GCN (Chen et al 2022), CHGMAN (Niu et al 2022), and APSCL (Li, Li, and Xiao 2023) as baselines, as they focus on inter-aspect relations or aspect-oriented tree structures using GCNs and BERT.…”
Section: Baselines
confidence: 99%
“…3) Aspect-oriented tree construction. In addition to using existing dependency trees to model relational information, some researchers transform dependency trees into aspect-oriented tree structures to enhance the GCN structure and better capture aspect-related information (Wang et al 2020; Zhou et al 2021; Chen et al 2022). For example, Wang et al (2020) reshaped and pruned a dependency parsing tree to construct an aspect-oriented tree that captures aspect-related information; Chen et al (2022) used reinforcement learning and regularization to induce discrete opinion trees that shorten the distance to the corresponding opinion words.…”
Section: Introduction
confidence: 99%
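
The reshape-and-prune idea mentioned in the excerpt above can be illustrated with a small sketch. This is not Chen et al.'s reinforcement-learning induction, nor Wang et al.'s exact procedure; it is a hypothetical, minimal reconstruction of the simpler pre-processing step: re-root a dependency parse at the aspect token and keep only tokens within a few hops, so opinion words end up syntactically close to the aspect. The head-array input format and the `max_hops` cutoff are assumptions made for illustration.

```python
from collections import defaultdict, deque

def aspect_oriented_distances(heads, aspect_idx, max_hops=3):
    """Re-root a dependency parse at the aspect token and keep only
    tokens within `max_hops` of it -- a rough analogue of the
    reshape-and-prune step described for aspect-oriented trees.

    heads[i] is the 0-based index of token i's head (-1 for the root).
    Returns {token_idx: hop_distance} for the retained tokens.
    """
    # Build an undirected adjacency list so the tree can be traversed
    # outward from the aspect, regardless of original edge direction.
    adj = defaultdict(list)
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child].append(head)
            adj[head].append(child)

    # Breadth-first search from the aspect token; the hop count serves
    # as the "syntactic distance" used to prune far-away tokens.
    dist = {aspect_idx: 0}
    queue = deque([aspect_idx])
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Toy example: aspect token at index 1 of an invented 7-token parse.
heads = [1, 3, 3, -1, 3, 6, 3]
print(aspect_oriented_distances(heads, aspect_idx=1, max_hops=2))
```

On the toy parse, only tokens within two hops of the aspect are retained, together with their hop distances, which a downstream GCN could use as a pruning mask or as distance-based edge weights.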
“…We follow Chen et al (2017) and use Accuracy and Macro-F1 score as evaluation metrics. We compare our approach with state-of-the-art models that require training, namely DGEDT (Tang et al 2020) and dotGCN (Chen et al 2022).…”
Section: Aspect-based Sentiment Analysis
confidence: 99%
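
As a quick reference for the two metrics named in this excerpt, the snippet below computes Accuracy and Macro-F1 with scikit-learn; the gold and predicted polarity labels are invented purely for illustration.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold and predicted sentiment polarities (illustration only).
gold = ["positive", "negative", "neutral", "positive", "negative"]
pred = ["positive", "neutral",  "neutral", "positive", "negative"]

acc = accuracy_score(gold, pred)
# Macro-F1 is the unweighted mean of the per-class F1 scores, so rare
# classes count as much as frequent ones.
macro_f1 = f1_score(gold, pred, average="macro")
print(f"Accuracy: {acc:.3f}  Macro-F1: {macro_f1:.3f}")
```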
“…Models. The TSA models that we benchmark on METS-CoV-TSA can be classified into 4 categories: 1 statistical machine learning model: SVM (Vo and Zhang, 2015); 7 traditional neural network models: ASGCN, LSTM (Hochreiter and Schmidhuber, 1997), TD-LSTM (Tang et al, 2016b), MemNet (Tang et al, 2016a), IAN (Ma et al, 2017), MGAN (Fan et al, 2018), and TNet-LF; 6 general domain PLM (BERT-base-uncased) models: AEN, LCF (Zeng et al, 2019), BERT-SPC (Devlin et al, 2019), depGCN, kumaGCN (Chen et al, 2020b), and dotGCN (Chen et al, 2022); and 4 models (BERT-SPC, depGCN, kumaGCN, and dotGCN) with a COVID-19 related PLM (COVID-TWITTER-BERT).…”
Section: Targeted Sentiment Analysis
confidence: 99%