Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis 2022
DOI: 10.18653/v1/2022.wassa-1.22

Continuing Pre-trained Model with Multiple Training Strategies for Emotional Classification

Abstract: Emotion is an essential attribute of human beings. Perceiving and understanding emotions in a human-like manner is central to developing emotional intelligence. This paper describes the contribution of the LingJing team's method to the Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA) 2022 shared task on Emotion Classification. Participants are required to predict seven emotions from empathic responses to news or stories that caused harm to indiv…
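As a rough illustration of the task setup described in the abstract, the sketch below loads a pre-trained encoder with a seven-way classification head and scores one example response. The checkpoint, label set, and example sentence are assumptions for illustration only, not the LingJing team's actual system.

```python
# A rough sketch of seven-way emotion classification with a pre-trained encoder.
# The checkpoint and label names below are assumptions, not the team's system.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EMOTIONS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(EMOTIONS)
)

# Encode one (hypothetical) empathic response and take the arg-max emotion.
inputs = tokenizer(
    "I feel terrible for the families affected by the flood.",
    return_tensors="pt", truncation=True,
)
with torch.no_grad():
    logits = model(**inputs).logits
print(EMOTIONS[logits.argmax(dim=-1).item()])  # arbitrary until the head is fine-tuned on labeled data
```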

Cited by 2 publications (1 citation statement) · References 23 publications
“…Fine-tuning the models with small amounts of available labeled data produced much better results than earlier methods. Recent research includes the generation of synthetic code-switched data (for better fine-tuning), intermediate task training (Prasad et al., 2021), continued pre-training (Li et al., 2022) (masked language modeling using code-switched texts), late fusion (Mundra et al., 2021) (using multiple pre-trained transformer-based models and combining their outputs to improve overall performance), data augmentation (Mundra et al., 2021), both random augmentation (RA) and balanced augmentation (BA), and custom attention models (Li et al., 2021). Other techniques are used to improve sentiment analysis results in code-switched texts specific to language pairs or dataset specifications.…”
Section: Dataset Description
confidence: 99%
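The continued pre-training referenced in this citation statement typically means running additional masked-language-modeling (MLM) steps on unlabeled in-domain text before fine-tuning on the labeled task. A minimal sketch follows, assuming a RoBERTa checkpoint and a hypothetical plain-text corpus file; model names, file paths, and hyperparameters are placeholders, not the setup used in the cited work.

```python
# A minimal sketch of continued pre-training via masked language modeling (MLM)
# on unlabeled in-domain text; file names and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Hypothetical plain-text corpus, one document per line.
corpus = load_dataset("text", data_files={"train": "in_domain_corpus.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

# The collator masks 15% of tokens on the fly, the standard MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="continued-mlm",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=corpus,
    data_collator=collator,
)
trainer.train()
model.save_pretrained("continued-mlm")  # reloaded later for fine-tuning on the labeled emotion data
```

The saved checkpoint would then replace the generic one when building the sequence-classification model, which is the usual way continued pre-training feeds into task fine-tuning.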