Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (2017)
DOI: 10.18653/v1/w17-5235

UWat-Emote at EmoInt-2017: Emotion Intensity Detection using Affect Clues, Sentiment Polarity and Word Embeddings

Abstract: This paper describes the UWaterloo affect prediction system developed for EmoInt-2017. We delve into our feature selection approach for affect intensity, affect presence, sentiment intensity and sentiment presence lexica alongside pretrained word embeddings, which are utilized to extract emotion intensity signals from tweets in an ensemble learning approach. The system employs emotion-specific model training, and utilizes distinct models for each of the emotion corpora in isolation. Our system utilizes gradient…
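The pipeline outlined in the abstract (lexicon-derived affect and sentiment features combined with pretrained word embeddings, with a separate gradient boosted regressor per emotion corpus) can be summarized in the sketch below. The feature aggregates, loader interfaces and hyperparameters are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of the per-emotion setup described in the abstract:
# lexicon-derived affect/sentiment features plus an averaged word embedding,
# fed to one gradient boosted regressor per emotion corpus.
# The lexicon/embedding lookups and aggregates are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def featurize(tokens, affect_lex, sentiment_lex, embeddings, dim=300):
    """Concatenate simple lexicon aggregates with an averaged word embedding."""
    affect = [affect_lex.get(t, 0.0) for t in tokens]
    senti = [sentiment_lex.get(t, 0.0) for t in tokens]
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    avg_vec = np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    lex_feats = [sum(affect), max(affect, default=0.0),
                 sum(senti), max(senti, default=0.0)]
    return np.concatenate([lex_feats, avg_vec])

def train_per_emotion(corpora, affect_lex, sentiment_lex, embeddings):
    """Train a distinct regressor for each emotion corpus (e.g. anger, fear, joy, sadness)."""
    models = {}
    for emotion, (token_lists, intensities) in corpora.items():
        X = np.vstack([featurize(toks, affect_lex, sentiment_lex, embeddings)
                       for toks in token_lists])
        models[emotion] = GradientBoostingRegressor().fit(X, intensities)
    return models
```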


Cited by 5 publications (3 citation statements)
References 14 publications (4 reference statements)
“…The syntactic and orthographic form of tweets often differs substantially from text belonging to other domains (John and Vechtomova, 2017). As such, pre-processing procedures are as important as the architecture of any given model…”
Section: Preprocessing
confidence: 99%
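As a concrete illustration of the citing authors' point about tweet-specific pre-processing, the sketch below normalizes the orthographic quirks typical of tweets (URLs, @-mentions, hashtags, character elongation). The specific rules are assumptions and are not taken from either the cited or the citing system.

```python
# Illustrative tweet normalization of the kind the statement refers to;
# the rule set here is an assumption, not the cited system's pipeline.
import re

def normalize_tweet(text):
    text = text.lower()
    text = re.sub(r"https?://\S+", "<url>", text)     # replace URLs
    text = re.sub(r"@\w+", "<user>", text)            # replace @-mentions
    text = re.sub(r"#(\w+)", r"<hashtag> \1", text)   # split hashtags
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)        # "soooo" -> "soo"
    return text.split()
```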
“…The system described in this paper builds upon a survey of some of the best performing systems from previous related shared tasks (Rosenthal et al., 2017). In particular, we draw inspiration from the systems described in (John and Vechtomova, 2017), which makes use of gradient boosted trees for regression; (Goel et al., 2017), which employs an ensemble of various neural models; and (Baziotis et al., 2017), which features Long Short-Term Memory (LSTM) networks with an attention mechanism. Our work contributes to the aforementioned approaches by further developing a variety of neural architectures, using transfer learning via pretrained sentence encoders, testing methods of ensembling neural and non-neural models, and gauging the performance and stability of a regressor across languages…”
Section: Introduction
confidence: 99%
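One ingredient this citing system mentions, ensembling neural and non-neural models, can be illustrated with a simple prediction blend whose mixing weight is tuned on development data against Pearson correlation (the shared task's evaluation metric). The weight search and the choice of component models below are illustrative assumptions, not the published system.

```python
# Hedged sketch of one ensembling scheme: blend the predictions of a neural
# model and a non-neural (tree) regressor, choosing the mixing weight that
# maximizes Pearson r on held-out development data.
import numpy as np
from scipy.stats import pearsonr

def blend(neural_dev, tree_dev, y_dev, neural_test, tree_test):
    """Return blended test predictions using the best dev-set mixing weight."""
    best_w, best_r = 0.5, -1.0
    for w in np.linspace(0.0, 1.0, 21):
        r, _ = pearsonr(w * neural_dev + (1 - w) * tree_dev, y_dev)
        if r > best_r:
            best_w, best_r = w, r
    return best_w * neural_test + (1 - best_w) * tree_test
```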
“…However, these methods cannot utilize the contextual information from texts. Supervised methods are mainly based on SVR (Madisetty and Desarkar, 2017), linear regression (John and Vechtomova, 2017) and neural networks (Goel et al., 2017; Köper et al., 2017). Usually neural network-based methods outperform SVR and linear regression-based methods significantly…”
Section: Introduction
confidence: 99%
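For reference, the two non-neural baselines named in this statement (SVR and linear regression) can be fit on the same feature matrix as in the earlier sketch; the kernel and regularization settings below are placeholders rather than those used by the cited systems.

```python
# Illustrative non-neural baselines for emotion intensity regression;
# hyperparameters are assumptions, not the cited systems' settings.
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

def fit_baselines(X_train, y_train):
    svr = SVR(kernel="rbf", C=1.0).fit(X_train, y_train)
    lin = LinearRegression().fit(X_train, y_train)
    return {"svr": svr, "linear": lin}
```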