Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis 2018
DOI: 10.18653/v1/w18-6243
Emo2Vec: Learning Generalized Emotion Representation by Multi-task Training

Abstract: In this paper, we propose Emo2Vec which encodes emotional semantics into vectors. We train Emo2Vec by multi-task learning six different emotion-related tasks, including emotion/sentiment analysis, sarcasm classification, stress detection, abusive language classification, insult detection, and personality recognition. Our evaluation of Emo2Vec shows that it outperforms existing affect-related representations, such as Sentiment-Specific Word Embedding and DeepMoji embeddings with much smaller training corpora. W…
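The multi-task setup the abstract describes — one shared embedding matrix updated by gradients from several emotion-related classification heads — can be illustrated with a minimal sketch. This is an assumed toy formulation, not the authors' implementation: the vocabulary size, dimensionality, task names, and mean-pooling classifier are all illustrative.

```python
import numpy as np

# Hypothetical sketch of multi-task embedding training: a shared
# embedding matrix E receives gradient updates from several
# task-specific logistic-regression heads, so signal from every
# task accumulates in the same word vectors. All names and sizes
# here are illustrative, not taken from the paper.

rng = np.random.default_rng(0)
VOCAB, DIM = 100, 8
E = rng.normal(scale=0.1, size=(VOCAB, DIM))      # shared embeddings

# one binary head [w, b] per task, e.g. sentiment / sarcasm / abuse
tasks = {name: [rng.normal(scale=0.1, size=DIM), 0.0]
         for name in ("sentiment", "sarcasm", "abuse")}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(task, token_ids, label, lr=0.1):
    """One SGD step: mean-pool the token embeddings, apply the
    task head, and back-propagate into both head and embeddings."""
    w, b = tasks[task]
    x = E[token_ids].mean(axis=0)                 # sentence vector
    p = sigmoid(w @ x + b)
    err = p - label                               # dL/dlogit for BCE loss
    tasks[task][0] = w - lr * err * x             # update task head
    tasks[task][1] = b - lr * err
    E[token_ids] -= lr * err * w / len(token_ids) # update shared embeddings
    return p

# toy usage: the same embeddings receive updates from two tasks
before = E[[1, 2]].copy()
train_step("sentiment", [1, 2], 1.0)
train_step("sarcasm", [1, 2], 0.0)
changed = not np.allclose(before, E[[1, 2]])
```

The key design point is that only the heads are task-specific; the embedding table is shared, which is what lets a small corpus per task still yield a generalized representation.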

Cited by 59 publications (41 citation statements)
References 35 publications
“…The tone of system responses needs to be adjusted according to user's emotional states and affects. So the system needs to ground in affect or emotion of the user [30,104,109].…”
Section: Open-domain Dialog vs Task-oriented Dialog
confidence: 99%
“…We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) (Cho et al, 2014) and Convolutional Neural Network (CNN) (as designed in Kim (2014)) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., Xu et al (2018); Abdul-Mageed and Ungar (2017)). Because neural models require large amounts of data, we do not cull the data by annotator agreement for these experiments and use all the labeled data we have.…”
Section: Supervised Models
confidence: 99%
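The excerpt above mentions a Kim-2014-style CNN with parallel filters of size 2 and 3. A minimal sketch of that architecture's core operation — parallel convolutions over word vectors, each max-pooled over time and concatenated — looks as follows. This is an illustrative reconstruction, not the cited authors' code; the sequence length, dimensionality, and filter counts are assumptions.

```python
import numpy as np

# Illustrative sketch of the CNN idea referenced above: parallel
# convolution filters of widths 2 and 3 slide over a sequence of
# word vectors; each filter's activations are max-pooled over time
# and the results are concatenated into one feature vector.

rng = np.random.default_rng(1)
SEQ_LEN, DIM, N_FILTERS = 10, 8, 4
x = rng.normal(size=(SEQ_LEN, DIM))               # embedded sentence

def conv_max_pool(x, width, n_filters, rng):
    """Apply n_filters filters of the given width at every position
    of x, then take the max activation over positions (max-over-time)."""
    W = rng.normal(scale=0.1, size=(n_filters, width * DIM))
    windows = np.stack([x[i:i + width].ravel()
                        for i in range(len(x) - width + 1)])
    acts = np.maximum(windows @ W.T, 0.0)         # ReLU activations
    return acts.max(axis=0)                       # shape (n_filters,)

features = np.concatenate([conv_max_pool(x, w, N_FILTERS, rng)
                           for w in (2, 3)])
# `features` would then feed a softmax classifier over emotion labels
```

Max-over-time pooling is what makes the feature vector length independent of the sentence length, which is why this design works for variable-length text.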
“…Embeddings Previous works have extensively explored different representations such as word (Mikolov et al, 2013;Pennington et al, 2014;Grave et al, 2018;Xu et al, 2018), subword (Sennrich et al, 2016;Heinzerling and Strube, 2018), and character (dos Santos and Zadrozny, 2014; Wieting et al, 2016). Lample et al (2016) has successfully concatenated character and word embeddings to their model, showing the potential of combining multiple representations.…”
Section: Related Work
confidence: 99%
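The excerpt above notes that character and word embeddings can be concatenated into a single input representation. A toy sketch of that combination — with assumed vector sizes, not Lample et al.'s implementation — is simply:

```python
import numpy as np

# Toy sketch (assumed shapes, not the cited implementation) of
# concatenating a word-level vector with a character-derived vector
# so downstream layers see both representations at once.

rng = np.random.default_rng(2)
word_vec = rng.normal(size=50)    # e.g. a pretrained word embedding
char_vec = rng.normal(size=25)    # e.g. a character-level summary vector
combined = np.concatenate([word_vec, char_vec])   # shape (75,)
```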