Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021
DOI: 10.18653/v1/2021.acl-long.330

Societal Biases in Language Generation: Progress and Challenges

Abstract: Technology for language generation has advanced rapidly, spurred by advancements in pre-training large models on massive amounts of data and the need for intelligent agents to communicate in a natural manner. While techniques can effectively generate fluent text, they can also produce undesirable societal biases that can have a disproportionately negative impact on marginalized populations. Language generation presents unique challenges for biases in terms of direct user interaction and the structure of decoding techniques. …

Cited by 60 publications (49 citation statements) | References 63 publications
“…For example, whereas "The woman is a nurse" is not a problematic sentence, it can be problematic if the model disproportionately associates women with certain occupations. As discussed in Sheng et al. (2021), distributional biases in language models can have both negative representational impacts (e.g., Kay et al. (2015)) and allocational impacts (e.g., Dastin (2018)). To investigate distributional biases in our model, we measure stereotypical associations between gender and occupation, the distribution of sentiment in samples conditioned on different social groups, and perplexity on different dialects.…”
Section: Distributional Bias (mentioning)
confidence: 99%
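The first measurement this excerpt describes, stereotypical gender-occupation association, can be approximated by counting co-occurrences of occupation words and gendered pronouns in model samples. Below is a minimal sketch assuming a hypothetical list of generated continuations and illustrative word lists; it is not the cited paper's actual evaluation code.

```python
from collections import Counter

# Hypothetical generated samples (in practice, continuations sampled
# from the language model under prompts mentioning each occupation).
samples = [
    "The nurse said she would check the chart.",
    "The engineer explained that he fixed the bug.",
    "The nurse told him she was off duty.",
]

# Illustrative word lists; real evaluations use curated lexicons.
FEMALE = {"she", "her", "hers"}
MALE = {"he", "him", "his"}
OCCUPATIONS = {"nurse", "engineer", "doctor"}

# Count how often each occupation co-occurs with gendered pronouns.
counts = {occ: Counter() for occ in OCCUPATIONS}
for text in samples:
    tokens = {t.strip(".,").lower() for t in text.split()}
    for occ in OCCUPATIONS & tokens:
        counts[occ]["female"] += len(FEMALE & tokens)
        counts[occ]["male"] += len(MALE & tokens)

# A strongly skewed female/male ratio for an occupation suggests a
# stereotypical association in the model's samples.
for occ, c in counts.items():
    total = c["female"] + c["male"]
    if total:
        print(occ, "female share:", c["female"] / total)
```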
“…Challenges in distributional bias. While we only consider a few possible evaluations (see Sheng et al. (2021) for an overview), we observe that distributional bias can be especially challenging to measure. Figure 6a illustrates the brittleness of template-based evaluation: simply changing the verb in the gender and occupation template from "was" to "is" impacts observed trends.…”
Section: Challenges in Toxicity and Bias (mentioning)
confidence: 99%
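The brittleness the excerpt points to can be reproduced with a tiny experiment: run the same gender-occupation template under both verb forms and compare the resulting pronoun distributions. A minimal sketch follows, using the Hugging Face transformers text-generation pipeline with GPT-2 as a stand-in model; the template wording and the pronoun-counting heuristic are illustrative assumptions, not the cited paper's setup.

```python
from transformers import pipeline

# Stand-in model; the cited evaluation used a much larger LM.
generator = pipeline("text-generation", model="gpt2")

FEMALE, MALE = {"she", "her", "hers"}, {"he", "him", "his"}

def female_share(verb: str, occupation: str = "nurse", n: int = 20) -> float:
    """Fraction of gendered continuations that contain a female pronoun."""
    prompt = f"The {occupation} {verb} known for"
    outs = generator(prompt, max_new_tokens=15, num_return_sequences=n,
                     do_sample=True, pad_token_id=50256)
    female = male = 0
    for out in outs:
        tokens = set(out["generated_text"].lower().split())
        female += bool(FEMALE & tokens)
        male += bool(MALE & tokens)
    return female / max(female + male, 1)

# Changing only the verb tense can shift the observed trend.
print("was:", female_share("was"))
print("is: ", female_share("is"))
```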
“…For language generation systems to be deployed, they should behave according to specified principles in a robust way. Typical requirements are linguistic acceptability, avoidance of undesirable societal biases (Sheng et al., 2021), and the avoidance of harmful speech acts. Contrastive evaluation is one of several methods that can help predict the behavior of language generation systems.…”
Section: Broader Impact (mentioning)
confidence: 99%
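Contrastive evaluation, as named in the excerpt, typically scores a model on minimally differing sentence pairs and checks which variant it prefers. Below is a minimal sketch that compares GPT-2 log-likelihoods on one hand-built pair; the pair and the choice of GPT-2 are illustrative assumptions, not the cited work's benchmark.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to the sentence."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # .loss is the mean negative log-likelihood per token,
        # so multiply by length to recover the total.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return -loss.item() * enc["input_ids"].size(1)

# A minimally differing pair; a large preference gap flags a bias.
pair = ("The doctor said she was on call.",
        "The doctor said he was on call.")
for sentence in pair:
    print(f"{log_likelihood(sentence):8.2f}  {sentence}")
```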
“…Unlike other unsafety problems, social biases that convey negative stereotypes or prejudices about specific populations are usually stated in implicit expressions rather than explicit words (Blodgett et al., 2020), and are thus challenging to deal with. On the other hand, dialog systems serve as a direct interface to users, and their biased responses may have a disproportionately negative impact, which also hinders the broad deployment of dialog systems, especially large-scale generative models (Sheng et al., 2021). Thus, it is crucial to tackle the social bias issue in conversational situations.…”
Section: Introduction (mentioning)
confidence: 99%