2020
DOI: 10.48550/arxiv.2005.00547
Preprint
GoEmotions: A Dataset of Fine-Grained Emotions

Abstract: Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a fine-grained typology, adaptable to multiple downstream tasks. We introduce GoEmotions, the largest manually annotated dataset of 58k English Reddit comments, labeled for 27 emotion categories or Neutral. We demonstrate the high quality of the annotations via Principal Preserved Component …

Cited by 50 publications (116 citation statements)
References 11 publications
“…The classes of lowest prevalence, such as scared, had the poorest results, while the more frequent classes, such as adoring, approving, and saddened had the highest results. To put these results in further perspective, we note that they are on par with applying BERT-based models to the related task of emotion detection in Demszky et al (2020). Specifically, using similar hyper-parameters, that work achieved a macro-averaged F1-score of 0.64 for a taxonomy of 6 labels.…”
Section: Creating and Evaluating CARE-BERT
confidence: 84%
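For context, the macro-averaged F1-score quoted above is the unweighted mean of per-label F1 scores, so rare labels count as much as frequent ones. A minimal sketch of the computation (the label names and predictions are illustrative, not taken from the paper):

```python
# Macro-averaged F1: compute F1 independently per label, then average
# without weighting by label frequency.
def macro_f1(y_true, y_pred, labels):
    scores = []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Hypothetical gold labels and model predictions over a 3-emotion taxonomy.
y_true = ["joy", "anger", "joy", "fear", "anger", "joy"]
y_pred = ["joy", "joy", "joy", "fear", "anger", "anger"]
print(round(macro_f1(y_true, y_pred, ["joy", "anger", "fear"]), 3))  # → 0.722
```

Because the average is unweighted, the poor performance on low-prevalence classes noted in the quote drags the macro score down directly, which is why it is the metric of choice for imbalanced emotion taxonomies.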
“…CARE-BERT provides strong baseline performance for the task of predicting affective response, on par with the SOTA models for emotion recognition. Furthermore, we show that CARE-BERT can be used for transfer learning to a different emotion-recognition task, achieving similar performance to Demszky et al (2020) which relied on manually-labeled training data.…”
Section: Introduction
confidence: 85%