Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.597

Relation-aware Graph Attention Networks with Relational Position Encodings for Emotion Recognition in Conversations

Abstract: Interest in emotion recognition in conversations (ERC) has been increasing in various fields because it can be used to analyze user behaviors and detect fake news. Many recent ERC methods use graph-based neural networks to take the relationships between the utterances of the speakers into account. In particular, the state-of-the-art method considers self- and inter-speaker dependencies in conversations by using relational graph attention networks (RGAT). However, graph-based neural networks do not take sequent…
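To make the mechanism named in the title concrete, below is a minimal PyTorch sketch of relation-aware graph attention with a relational position bias. It is a simplified single-head formulation; the class name, the scalar per-(relation, distance) bias, and the shared value projection are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RelationalGraphAttention(nn.Module):
    def __init__(self, dim, num_relations, max_distance=10):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        # One key projection per relation type (e.g., self- vs. inter-speaker edges).
        self.keys = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_relations))
        self.value = nn.Linear(dim, dim)
        # Learned scalar bias per (relation, clipped relative distance) pair:
        # a simplified stand-in for the relational position encodings.
        self.pos_bias = nn.Parameter(torch.zeros(num_relations, 2 * max_distance + 1))
        self.max_distance = max_distance
        self.scale = dim ** 0.5

    def forward(self, h, rel, adj):
        # h: (n, dim) utterance states; rel: (n, n) relation id of each edge;
        # adj: (n, n) boolean adjacency of the conversation graph. Every node is
        # assumed to have at least one incoming edge (e.g., a self-loop), so no
        # softmax row is all -inf.
        n = h.size(0)
        q = self.query(h)
        pos = torch.arange(n, device=h.device)
        dist = (pos[None, :] - pos[:, None]).clamp(-self.max_distance, self.max_distance)
        dist = dist + self.max_distance          # shift into [0, 2 * max_distance]
        scores = h.new_full((n, n), float('-inf'))
        for r, key_r in enumerate(self.keys):
            s = q @ key_r(h).t() / self.scale    # (n, n) scaled dot-product scores
            s = s + self.pos_bias[r][dist]       # add the relational position bias
            scores = torch.where((rel == r) & adj, s, scores)
        attn = torch.softmax(scores, dim=-1)
        return attn @ self.value(h)

# Hypothetical usage: 5 utterances, 2 relation types (same speaker / other speaker).
h = torch.randn(5, 64)
rel = torch.randint(0, 2, (5, 5))
adj = torch.ones(5, 5, dtype=torch.bool)        # fully connected, incl. self-loops
out = RelationalGraphAttention(64, num_relations=2)(h, rel, adj)   # (5, 64)
```

The position bias is what injects the sequential information that, per the abstract, plain graph attention lacks: two edges with the same relation type but different utterance distances receive different attention scores.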

Cited by 144 publications (74 citation statements)
References 19 publications
“…We compared our model with the following baselines in our experiments. Recurrence-based methods: DialogueRNN (Majumder et al., 2019), DialogueRNN-RoBERTa, and COSMIC without external knowledge. Graph-based methods: DialogueGCN (Ghosal et al., 2019), KET (Zhong et al., 2019), DialogXL (Shen et al., 2020), and RGAT (Ishiwatari et al., 2020). Feature extractor: RoBERTa (Liu et al., 2019).…”
Section: Compared Methods
confidence: 99%
“…We utilize only the textual modality of the above datasets in our experiments. For evaluation metrics, we follow Ishiwatari et al. (2020) and Shen et al. (2020) and choose micro-averaged F1 excluding the majority class (neutral) for DailyDialog, and weighted-average F1 for the other datasets.…”
Section: Training and Prediction
confidence: 99%
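For concreteness, here is a minimal sketch of the two evaluation protocols this statement describes, using scikit-learn. The label ids and the assignment of 0 to the neutral class are hypothetical.

```python
from sklearn.metrics import f1_score

# Hypothetical label ids: 0 = neutral (majority class), 1..3 = emotion classes.
y_true = [0, 0, 1, 2, 3, 1, 0, 2]
y_pred = [0, 1, 1, 2, 3, 0, 0, 2]

# DailyDialog protocol: micro-averaged F1 over the non-neutral classes only.
micro_f1 = f1_score(y_true, y_pred, labels=[1, 2, 3], average='micro')

# Other datasets: weighted-average F1 over all classes.
weighted_f1 = f1_score(y_true, y_pred, average='weighted')

print(f'micro-F1 (excl. neutral): {micro_f1:.3f}, weighted-F1: {weighted_f1:.3f}')
```

Excluding the dominant neutral class keeps the DailyDialog score from being inflated by majority-class predictions, which is why this protocol is standard for that dataset.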
“…To achieve this learning objective in deep models, we design syntax-aware networks that incorporate the encodings of dependency relations and POS tags into the encoding of semantics. However, we find empirically that the conventional attention-based incorporation of syntax, e.g., relational graph attention networks (Ishiwatari et al., 2020), has little effect on improving the model. One possible reason is that the learning process may be dominated by the pre-trained language models due to their strength in semantic representation learning, which leads to an overfitted model.…”
Section: Introduction
confidence: 76%
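Below is a minimal sketch of the kind of attention-based syntax incorporation this statement refers to: dependency-relation and POS-tag embeddings are fused with pre-trained LM token states, and attention is restricted to dependency edges. The module name, the additive fusion, and the masking scheme are assumptions for illustration, not the cited model.

```python
import torch
import torch.nn as nn

class SyntaxAwareEncoder(nn.Module):
    """Hypothetical module: fuses dependency-relation and POS-tag encodings
    with pre-trained LM token states via attention over dependency edges."""
    def __init__(self, dim, num_dep_relations, num_pos_tags, num_heads=4):
        super().__init__()
        self.dep_emb = nn.Embedding(num_dep_relations, dim)
        self.pos_emb = nn.Embedding(num_pos_tags, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.num_heads = num_heads

    def forward(self, tokens, dep_ids, pos_ids, dep_mask):
        # tokens: (b, n, dim) LM states; dep_ids / pos_ids: (b, n) syntax ids;
        # dep_mask: (b, n, n) boolean, True where attention is DISALLOWED
        # (MultiheadAttention's convention). Keep the diagonal unmasked so
        # every token can attend somewhere.
        x = tokens + self.dep_emb(dep_ids) + self.pos_emb(pos_ids)
        mask = dep_mask.repeat_interleave(self.num_heads, dim=0)  # (b*heads, n, n)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out
```

The statement's point is that this straightforward fusion can be drowned out by the pre-trained LM's own representations, which motivates the citing paper's alternative design.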
“…Lu et al. (2020) [footnote 5: https://competitions.codalab.org/competitions/25845#results] use a network on a vocabulary graph, which is based on word co-occurrence information, and Huang and Carley (2019) and Xu and Yang (2019) use syntactic features as a graph. There are also several methods that incorporate positional encodings into GATs (Ingraham et al., 2019; Ishiwatari et al., 2020). Our method uses GATs to consider syntactic features in combination with positional features, which distinguishes it from conventional methods.…”
Section: Related Work
confidence: 99%