2023
DOI: 10.1007/s00521-023-08276-8

Transformer transfer learning emotion detection model: synchronizing socially agreed and self-reported emotions in big data

Abstract: Tactics to determine the emotions of authors of texts such as Twitter messages often rely on multiple annotators who label relatively small data sets of text passages. An alternative method gathers large text databases that contain the authors’ self-reported emotions, to which artificial intelligence, machine learning, and natural language processing tools can be applied. Both approaches have strengths and weaknesses. Emotions evaluated by a few human annotators are susceptible to idiosyncratic biases that refl…

Cited by 11 publications (7 citation statements) | References: 32 publications

Citation statements (ordered by relevance):
“…Their analysts work with structured and unstructured data by applying advanced techniques, such as Natural Language Processing (NLP) and deep learning. The n-grams approach is a valuable pre-processing solution for NLP (Hartmann et al., 2019; Lee et al., 2023), wherein frequently occurring word combinations are coded quantitatively and non-informative features in natural language data are reduced. This results in coded data that can be analysed; Web Appendix E of Hartmann et al. (2019) provides relevant guidelines.…”
Section: Analytical Companies: Customer-centric ML Models
confidence: 99%
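As an illustration of the n-gram pre-processing idea this statement describes (frequent word combinations coded as counts, infrequent and therefore non-informative features dropped), a minimal sketch using scikit-learn's CountVectorizer is given below. The example texts and the min_df threshold are assumptions chosen only for demonstration; this is not the pipeline of the cited paper.

```python
# Minimal sketch of n-gram pre-processing: frequent word combinations are
# coded quantitatively (as counts) and infrequent, non-informative n-grams
# are pruned. Example texts and thresholds are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer

texts = [
    "I am so happy with this product",
    "I am so happy today",
    "This product made me really angry",
]

# Unigrams and bigrams; min_df=2 keeps only n-grams occurring in at
# least two documents, thinning out non-informative features.
vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=2)
X = vectorizer.fit_transform(texts)

print(vectorizer.get_feature_names_out())  # surviving n-gram features
print(X.toarray())                         # quantitative (count) coding per text
```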
“…Recently, transformer models such as BERT and RoBERTa, which are based on deep learning architectures, have been found to outperform other ML techniques in analyzing natural language (Ko et al., 2023; Lee et al., 2023). These models are pre-trained language representation models, as also applied in ChatGPT.…”
Section: Analytical Companies: Customer-centric ML Models
confidence: 99%
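The transformer models named in this statement (BERT, RoBERTa) are typically applied to emotion detection by loading a pre-trained checkpoint and classifying text with it. A minimal sketch using the Hugging Face Transformers pipeline API follows; the checkpoint name is an assumed publicly available emotion model chosen for illustration, not the model from the cited paper.

```python
# Minimal sketch: emotion classification with a pre-trained transformer
# (RoBERTa family) via Hugging Face Transformers. The checkpoint name is an
# illustrative public model, not the one used in the cited paper.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # assumed example checkpoint
)

print(classifier("I can't believe they cancelled my flight again."))
# Expected output shape: [{'label': 'anger', 'score': ...}]
```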
“…Such ML-based systems have their own limitations. In addition, it is currently uncertain how much text and label data is necessary for training models based on self-reported mood data sets (Lee 2023). Social networking sites and microblogging services, such as Facebook and Twitter, provide an unprecedented quantity of (politically pertinent) user-generated content.…”
Section: Literature Review
confidence: 99%
“…Table I presents the works in the literature which focused on predicting emotions and/or sentiments on social media using machine learning [25], [30]-[37], [40], [41]. This is based on the properties of the considered datasets, the emotion detection algorithms used, and the annotation strategy.…”
Section: A. Related Work
confidence: 99%
“…Most of these works [31]-[37] focused on detecting the sentiments (negative, neutral, and/or positive). The rest of the works focused on a few emotions, mainly Fear, Sadness and Anger [25], [30], [31], [40], [41], [44], [45].…”
Section: A. Related Work
confidence: 99%