Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.363

Mitigating Toxic Degeneration with Empathetic Data: Exploring the Relationship Between Toxicity and Empathy

Abstract: Content Warning: This paper includes examples of religious-based discriminatory language that may be offensive and upsetting. Large pre-trained neural language models have supported the effectiveness of many NLP tasks, yet they remain prone to generating toxic language, which hinders the safety of their use. Using empathetic data, we improve over recent work on controllable text generation that aims to reduce the toxicity of generated text. We find we are able to dramatically reduce the size of finetuning data to 7.5-…


Cited by 7 publications (5 citation statements) | References 23 publications
“…However, we see no improvement from using the EPITOME data. Similarly, recent work found that separate empathy types have different effects on toxicity reduction (Lahnala et al., 2022).…”
Section: Results (mentioning)
confidence: 83%
“…Specifically, we only fine-tuned on one medical dataset and one empathy dataset. As argued by Lahnala et al. (2022), there are limitations in the way that empathy datasets are crafted, particularly concerning applications such as ours that aim to assess cognitive empathic skills rather than surface-level emotional response.…”
Section: Discussion (mentioning)
confidence: 99%
“…One of the decoding approaches used to mitigate toxicity in model output was presented in Lahnala et al. [48], where they modified the model's probabilities based on an expert-desired attribute, namely empathy. By favouring tokens that express a more empathic response, they observed a correlation between empathy and toxicity, albeit dependent on the context.…”
Section: Decoding Techniques (mentioning)
confidence: 99%
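The decoding approach described in the statement above adjusts the base model's next-token scores with a second model that favours empathic tokens. Below is a minimal sketch of that kind of expert-guided decoding, assuming a DExperts-style logit combination; the checkpoint names, the stand-in empathy "expert", and the weight alpha are hypothetical placeholders for illustration, not the authors' released setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
base_lm = AutoModelForCausalLM.from_pretrained("gpt2")
# Placeholder: in practice this would be a copy of the base LM fine-tuned on empathetic data.
expert_lm = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_with_empathy_expert(prompt: str, alpha: float = 2.0, max_new_tokens: int = 30) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            base_logits = base_lm(ids).logits[:, -1, :]      # base LM's next-token scores
            expert_logits = expert_lm(ids).logits[:, -1, :]  # empathy expert's next-token scores
        # Shift the base distribution toward tokens the empathy expert favours.
        combined = base_logits + alpha * expert_logits
        next_id = torch.argmax(combined, dim=-1, keepdim=True)  # greedy pick for simplicity
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(generate_with_empathy_expert("I'm sorry you're going through this."))
```

In a full DExperts-style setup an "anti-expert" trained on the undesired attribute is typically subtracted as well; this sketch keeps only the expert term to match the behaviour described in the citation statement above.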