Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis 2022
DOI: 10.18653/v1/2022.wassa-1.31
CAISA at WASSA 2022: Adapter-Tuning for Empathy Prediction

Abstract: We build a system that leverages adapters, a lightweight and efficient method for adapting large language models, to perform the Empathy and Distress prediction tasks at WASSA 2022. In our experiments, we find that stacking our empathy and distress adapters on a pretrained emotion classification adapter outperforms full fine-tuning approaches and emotion feature concatenation. We make our experimental code publicly available.
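The adapter-stacking setup described in the abstract can be sketched with the AdapterHub `adapters` library. This is a minimal illustration, not the authors' released code: the backbone (`roberta-base`), the Hub adapter id, the adapter names, and the single-output regression head are assumptions made for the example.

```python
# Minimal sketch of stacking a task adapter on a pretrained emotion adapter
# with the AdapterHub `adapters` library. Backbone, adapter ids, and names
# are illustrative assumptions, not the paper's exact configuration.
from adapters import AutoAdapterModel
from adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("roberta-base")

# Load a pretrained emotion classification adapter from the Hub (assumed id).
emotion = model.load_adapter("AdapterHub/roberta-base-pf-emotion")

# Add a fresh adapter and a single-output head for a scalar empathy score.
model.add_adapter("empathy")
model.add_classification_head("empathy", num_labels=1)

# Train only the new empathy adapter; the backbone stays frozen.
model.train_adapter("empathy")

# Route the forward pass through the emotion adapter first, then the empathy adapter.
model.active_adapters = Stack(emotion, "empathy")
```

Only the new adapter and head parameters are updated during training, which is what makes the approach lightweight relative to full fine-tuning.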

Cited by 3 publications (4 citation statements) · References 20 publications
“…Most relevant to our work are recent advances in social and emotional commonsense reasoning using language models. Specifically, prior methods have used fine-tuning of language models such as BERT (Devlin et al., 2019; Reimers and Gurevych, 2019) and GPT-2 (Radford et al.) to model everyday events and the emotional reactions they cause (Rashkin et al., 2018; Sap et al., 2019b; Bosselut et al., 2019; Wang et al., 2022; West et al., 2022; Mostafazadeh et al., 2020) as well as to predict empathy, condolence, or prosocial outcomes (Lahnala et al., 2022a; Kumano et al.; Boukricha et al., 2013; Zhou and Jurgens, 2020; Bao et al., 2021). Understanding the emotional reactions elicited by events is a challenging task for many NLP systems, as it requires commonsense knowledge and extrapolation of meanings beyond the text alone.…”
Section: Related Work
Mentioning, confidence: 99%
“…In this approach, a dataset of labeled dialogues is used to train a model. This approach has been used in several studies (Bentis, 2021), (Buechel et al., 2018), (Chen et al., 2022), (Lahnala et al., 2022) and (Hosseini and Caragea, 2021). Another evaluation technique is to detect the emotions expressed in dialogue.…”
Section: Related Work
Mentioning, confidence: 99%
“…Other studies have focused on the use of deep learning techniques, such as recurrent neural networks (RNNs) and transformer models, to predict empathy in text. These approaches have shown promising results, with some studies reporting high levels of accuracy in empathy prediction (Chen et al., 2022), (Lahnala et al., 2022) and (Hosseini and Caragea, 2021).…”
Section: Introduction
Mentioning, confidence: 99%
“…The significance of data in this context cannot be overstated; it plays a pivotal role in shaping a model's ability to comprehend nuances, contextual information, and idiomatic expressions specific to a particular language [6]. Additionally, fine-tuning data must accurately mirror the linguistic intricacies and cultural nuances of the target language [7]. For languages with unique linguistic characteristics and cultural contexts [8], such as Vietnamese, acquiring suitable data becomes a formidable challenge.…”
Section: Introduction
Mentioning, confidence: 99%