Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.541

Fair and Argumentative Language Modeling for Computational Argumentation

Abstract: Although much work in NLP has focused on measuring and mitigating stereotypical bias in semantic spaces, research addressing bias in computational argumentation is still in its infancy. In this paper, we address this research gap and conduct a thorough investigation of bias in argumentative language models. To this end, we introduce ABBA, a novel resource for bias measurement specifically tailored to argumentation. We employ our resource to assess the effect of argumentative fine-tuning and debiasing on the …

Cited by 3 publications (3 citation statements)
References 36 publications
“…We can clearly observe a catastrophic drop in smoothed entropy for beam and greedy search whereas smoothed entropy of well-tuned sampling-based decoding algorithms stays mostly within the stable entropy zone. These stochastic decoding algorithms are also known to produce better completions (Holtermann et al., 2022). We postulate that these two things might be related.…”
Section: Stable Entropy Hypothesis
confidence: 91%
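To make the quantity in this excerpt concrete, here is a minimal sketch of computing per-step next-token entropy for a causal language model and smoothing it with a trailing moving average. The GPT-2 checkpoint, the window size, and the smoothing scheme are illustrative assumptions; the cited work's exact setup may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; any causal LM exposes logits the same way.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def smoothed_entropy(text: str, window: int = 5) -> list:
    """Next-token entropy at each step, smoothed by a trailing moving average."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0]              # (seq_len, vocab_size)
    log_p = torch.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)   # H_t = -sum_v p(v) log p(v)
    return [entropy[max(0, t - window + 1): t + 1].mean().item()
            for t in range(entropy.size(0))]
```

A collapse of these smoothed values toward zero under greedy or beam search, versus stable values under sampling, is the pattern the excerpt describes.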
“…We experiment with all models described in §3.2.1 in a fine-tuning setup. Following established work on argument scoring (e.g., Gretz et al., 2020; Holtermann et al., 2022), we concatenate the description d with the action a using a separator token (e.g., …”
Section: Canonical Rebuttal Scoring
confidence: 99%
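As a concrete illustration of the concatenation pattern described in this excerpt, the sketch below encodes a description–action pair with a BERT-style tokenizer, which inserts the separator token between the two texts automatically. The checkpoint and the example d and a strings are hypothetical.

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; the pattern is the same for any encoder with a [SEP] token.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")

description = "A proposal to expand the city's bike-lane network."  # hypothetical d
action = "Reallocate one traffic lane on Main Street."               # hypothetical a

# Passing the texts as a pair yields: [CLS] d tokens [SEP] a tokens [SEP]
enc = tok(description, action, truncation=True, return_tensors="pt")
print(tok.decode(enc.input_ids[0]))
```

The resulting input_ids (plus token_type_ids marking the two segments) then feed a sequence-classification or regression head for scoring.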
“…Intermediate language modeling on texts from the same or similar distribution as the downstream data has been shown to lead to improvements on various NLP tasks (e.g., Gururangan et al., 2020). During this process, the goal is to inject additional information into the PLM and thus specialize the model for a particular domain (e.g., Aharoni and Goldberg, 2020; Hung et al., 2022a; Bombieri et al., 2023) or language (e.g., Glavaš et al., 2020) or to encode other types of knowledge such as common sense knowledge (e.g., Lauscher et al., 2020a), argumentation knowledge (e.g., Holtermann et al., 2022), or geographic knowledge (e.g., Hofmann et al., 2022).…”
Section: Related Work
confidence: 99%
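For context on what intermediate language modeling involves in practice, here is a minimal sketch of the general recipe: continue masked-language-model pretraining on in-domain text before any downstream fine-tuning. The checkpoint, the one-line placeholder corpus, and the hyperparameters are illustrative assumptions, not any cited paper's configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Placeholder corpus: in practice, this would be in-domain text
# (e.g., argumentative documents for argumentation knowledge).
corpus = Dataset.from_dict({"text": ["Replace this with in-domain documents."]})
tokenized = corpus.map(lambda batch: tok(batch["text"], truncation=True),
                       batched=True, remove_columns=["text"])

# Dynamic masking: 15% of tokens are masked on the fly each pass.
collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)
args = TrainingArguments(output_dir="mlm-intermediate",
                         num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```

After this step, the specialized checkpoint replaces the generic one as the starting point for task fine-tuning.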