2023
DOI: 10.1109/taffc.2022.3204972
The Biases of Pre-Trained Language Models: An Empirical Study on Prompt-Based Sentiment Analysis and Emotion Detection

Cited by 92 publications (34 citation statements)
References 38 publications
“…The MSM model is based on neurosymbolic learning systems. An analysis was also performed to check the bias of pre-trained language models for sentiment analysis and emotion detection 24 .…”
Section: Related Work
confidence: 99%
“…Linguistic metaphor identification has been widely studied with the help of two shared tasks (Leong et al., 2018, 2020) and the large-scale annotated VUA dataset (Steen et al., 2010b). Scholars have also noticed the connection between linguistic metaphor processing and other tasks, such as affective computing (Xing et al., 2020; Duong et al., 2022; Mao et al., 2022b; Cambria et al., 2022a; Ma et al., 2023). However, there are sub-types of linguistic metaphors that have not been well studied yet, such as extended metaphors and metaphoric MWEs.…”
Section: Discussion
confidence: 99%
“…An empirical study was performed on prompt-based sentiment analysis and emotion detection 19 in order to understand the bias of pre-trained models applied to affective computing. The findings suggest that the number of label classes, the choice of emotional label words, prompt templates and positions, and the word forms of emotion lexicons are factors that bias pre-trained models 20 .…”
Section: Related Work
confidence: 98%
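The excerpt above names the levers that bias prompt-based classification: the prompt template and its position, the label-word choices, and their surface forms. A minimal sketch of cloze-style prompt-based sentiment analysis with a masked language model follows; the model name, template, and label words are illustrative assumptions, not the cited study's exact setup.

```python
# Minimal sketch: cloze-style prompt-based sentiment analysis with an MLM.
# Model, template, and label words are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

def prompt_sentiment(text, template, label_words):
    # The template string and the position of the mask slot within it
    # are themselves sources of bias.
    prompt = template.format(text=text, mask=tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_cols = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
    mask_logits = logits[0, mask_cols[0]]
    # Score each class by the logit of its label word's first subtoken;
    # the chosen surface form ("good" vs "great", "sad" vs "sadness")
    # biases the outcome.
    scores = {}
    for label, word in label_words.items():
        ids = tokenizer.encode(" " + word, add_special_tokens=False)
        scores[label] = mask_logits[ids[0]].item()
    return max(scores, key=scores.get)

# Varying the template or label words can flip the prediction on the same input.
print(prompt_sentiment(
    "The plot was predictable but the acting saved it.",
    template="{text} Overall, it was {mask}.",
    label_words={"positive": "good", "negative": "bad"},
))
```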
“…This work discusses the development of more bio-inspired approaches to the design of intelligent sentiment-mining systems that can handle semantic knowledge, make analogies, learn new affective knowledge, and detect, perceive, and “feel” emotions. In 20 , the authors proposed a commonsense-based neurosymbolic framework that employs unsupervised and reproducible subsymbolic techniques, such as auto-regressive language models and kernel methods, to build trustworthy symbolic representations that convert natural language into a sort of protolanguage and, hence, extract polarity from text in a completely interpretable and explainable manner 22 , 23 .…”
Section: Related Work
confidence: 99%
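For context, the "completely interpretable and explainable" polarity extraction the excerpt describes can be illustrated with a deliberately simple symbolic sketch in which every prediction is traceable to explicit lexicon entries and rules. The tiny lexicon and negation rule below are illustrative assumptions, not the cited framework's actual protolanguage or kernel-based representations.

```python
# A fully interpretable polarity sketch: every decision is recorded in a
# trace that can be inspected and explained. Lexicon and negation rule
# are illustrative assumptions only.
POLARITY = {"good": 1.0, "great": 1.0, "bad": -1.0, "awful": -1.0, "boring": -0.5}
NEGATORS = {"not", "never", "no"}

def polarity(text):
    tokens = text.lower().split()
    score, flip = 0.0, 1.0
    trace = []  # keep the evidence so every contribution is explainable
    for tok in tokens:
        if tok in NEGATORS:
            flip = -1.0  # negation inverts the next polar word
        elif tok in POLARITY:
            contribution = flip * POLARITY[tok]
            trace.append((tok, contribution))
            score += contribution
            flip = 1.0
    return score, trace

print(polarity("the food was not bad but the service was awful"))
# -> (0.0, [('bad', 1.0), ('awful', -1.0)]): negated "bad" contributes
#    positively, "awful" negatively, and the trace shows why.
```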