Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018.
DOI: 10.18653/v1/P18-1017
Obtaining Reliable Human Ratings of Valence, Arousal, and Dominance for 20,000 English Words

Abstract: Words play a central role in language and thought. Factor analysis studies have shown that the primary dimensions of meaning are valence, arousal, and dominance (VAD). We present the NRC VAD Lexicon, which has human ratings of valence, arousal, and dominance for more than 20,000 English words. We use Best-Worst Scaling to obtain fine-grained scores and address issues of annotation consistency that plague traditional rating scale methods of annotation. We show that the ratings obtained are vastly more reliable …
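The abstract's mention of Best-Worst Scaling can be made concrete. In BWS, annotators see small sets of items (typically 4-tuples) and mark only the best and worst; a simple counting procedure then converts those judgments into fine-grained real-valued scores. Below is a minimal sketch of that counting step, assuming toy annotation records; the rescaling to [0, 1] is illustrative and not necessarily the paper's exact pipeline.

```python
from collections import Counter

def bws_scores(annotations):
    """Best-Worst Scaling counting procedure.

    `annotations` is a list of (tuple_items, best, worst) records, where
    tuple_items is the 4-tuple shown to the annotator and best/worst are
    the items the annotator selected. An item's raw score is the fraction
    of its appearances it was chosen best minus the fraction chosen worst.
    """
    best_counts, worst_counts, appearances = Counter(), Counter(), Counter()
    for items, best, worst in annotations:
        appearances.update(items)
        best_counts[best] += 1
        worst_counts[worst] += 1

    scores = {}
    for item, n in appearances.items():
        raw = (best_counts[item] - worst_counts[item]) / n  # in [-1, 1]
        scores[item] = (raw + 1) / 2                        # rescale to [0, 1]
    return scores

# Toy example: two annotations of 4-word tuples rated for valence.
annotations = [
    (("joy", "table", "dread", "smile"), "joy", "dread"),
    (("smile", "mud", "joy", "panic"), "smile", "panic"),
]
print(bws_scores(annotations))
```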

Citations: Cited by 459 publications (358 citation statements)
References: 53 publications (48 reference statements)
“…We focus on an objective corpus-based approach, to avoid such potential criticisms. Second, in a similar vein, we decided to use an emotional model that learns affective information indirectly, by predicting the co-occurrence of emojis and text in a corpus, rather than using emotional representations derived directly from valence, arousal, and dominance norms (Mohammad, 2018; Warriner et al., 2013). This also increases the coverage of our model.…”
Section: Discussion (mentioning)
Confidence: 99%
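The coverage concern raised in this statement is easy to see in practice: a norms-based model can only score words that appear in the lexicon. A minimal lookup sketch, assuming the NRC VAD Lexicon has been downloaded as a tab-separated file with word, valence, arousal, and dominance columns (the filename and column layout here are assumptions):

```python
import csv

def load_vad(path):
    """Load a word -> (valence, arousal, dominance) map from a
    tab-separated file (word, V, A, D per line; header row assumed)."""
    vad = {}
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # skip the header row
        for word, v, a, d in reader:
            vad[word] = (float(v), float(a), float(d))
    return vad

vad = load_vad("NRC-VAD-Lexicon.txt")  # assumed filename
tokens = ["happy", "dreadful", "blorptastic"]
covered = [t for t in tokens if t in vad]
print(f"coverage: {len(covered)}/{len(tokens)}", covered)
```

Any token outside the 20,000-word lexicon simply receives no score, which is the gap a corpus-trained emoji model sidesteps.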
“…The model is very different from the one by De Deyne et al. (2018), which was constructed by concatenating valence, arousal, and potency ratings, for men and women separately (i.e., six dimensions), from the study by Warriner, Kuperman, and Brysbaert (2013), with valence, arousal, and dominance ratings from the study by Mohammad (2018). DeepMoji provides better representations for our purposes than ratings because, first, a model trained over a corpus of tweets, rather than subjective ratings, makes the emotion model more comparable to the linguistic and visual models, both trained over corpora.…”
Section: Methods (mentioning)
Confidence: 97%
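A sketch of the concatenation this statement describes: six dimensions from the gender-split Warriner et al. norms plus three from the NRC VAD Lexicon, giving a nine-dimensional affect vector per word. The dictionaries below are hypothetical placeholders for the two rating sets, not real norm values.

```python
import numpy as np

# Hypothetical stand-ins for the two rating sets described above.
warriner = {  # word -> (V_men, A_men, D_men, V_women, A_women, D_women)
    "happy": (8.2, 5.9, 7.0, 8.6, 6.3, 7.2),
}
nrc_vad = {  # word -> (valence, arousal, dominance) in [0, 1]
    "happy": (0.96, 0.73, 0.81),
}

def affect_vector(word):
    """Concatenate gender-split Warriner norms (6 dims) with
    NRC VAD ratings (3 dims) into one 9-dimensional vector."""
    if word not in warriner or word not in nrc_vad:
        return None  # word missing from one of the norm sets
    return np.concatenate([warriner[word], nrc_vad[word]])

print(affect_vector("happy"))  # 9-dimensional affect representation
```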