Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.355

Analyzing Stereotypes in Generative Text Inference Tasks

Abstract: Stereotypes are inferences drawn about people based on their demographic attributes, which may result in harms to users when a system is deployed. In generative language-inference tasks, given a premise, a model produces plausible hypotheses that follow either logically (natural language inference) or commonsensically (commonsense inference). Such tasks are therefore a fruitful setting in which to explore the degree to which NLP systems encode stereotypes. In our work, we study how stereotypes manifest when th…
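To make the task setup described in the abstract concrete, the following is a minimal sketch, assuming an off-the-shelf GPT-2 model from the Hugging Face transformers library; the prompt template and decoding settings are illustrative assumptions, not the paper's actual setup. Given a premise, the model samples free-text continuations, each of which serves as a candidate hypothesis.

    # Minimal sketch of generative inference (illustrative, not the paper's method):
    # sample candidate hypotheses that follow a premise.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    premise = "The tailor finished the suit ahead of schedule."
    prompt = premise + " Therefore,"  # hypothetical prompt template

    outputs = generator(
        prompt,
        max_new_tokens=20,
        num_return_sequences=3,
        do_sample=True,
    )
    for out in outputs:
        # Each sampled continuation is one candidate hypothesis.
        print(out["generated_text"][len(prompt):].strip())

In this framing, stereotypes surface when the sampled hypotheses systematically differ depending on the demographic group mentioned in the premise.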

Cited by 3 publications (2 citation statements)
References 45 publications

“…Table 3 lists all the individual social groups we cover in this work. We manually construct the list by combining and picking groups from the list of social groups from Sotnikova et al. (2021) and Koch et al. (2016) and also adding social groups we think are stereotyped in U.S. culture.…”
Section: Implementation Details
confidence: 99%
“…Much existing work focuses on diagnosing representational harms with bias probe tasks: tasks that measure whether a model's predictions differ between two (or more) groups of interest. A number of probe tasks have been proposed: Rudinger, May, and Van Durme (2017); Sheng et al. (2019); Bordia and Bowman (2019); Lee, Madotto, and Fung (2019); Liu et al. (2019a); May et al. (2019); Nadeem, Bethke, and Reddy (2021); Sotnikova et al. (2021) and others. Most of these focus on gender stereotypes.…”
Section: Social Biases in Large Language Models
confidence: 99%
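To make the quoted notion of a bias probe concrete, below is a minimal sketch, assuming GPT-2 and a hypothetical sentence template; it scores the same stereotyped continuation under two group terms and compares the model's likelihoods. This is the general shape of such probes, not any cited paper's exact method.

    # Minimal sketch of a bias probe (an assumption, not a cited paper's method):
    # compare the likelihood a language model assigns to the same continuation
    # when only the demographic group term is swapped.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def sentence_nll(text: str) -> float:
        """Average negative log-likelihood the model assigns to `text`."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return out.loss.item()

    # Identical template with the group term swapped; a large gap in NLL
    # suggests the model treats the groups differently for this continuation.
    for group in ["women", "men"]:
        text = f"The {group} were bad at math."  # hypothetical probe sentence
        print(group, round(sentence_nll(text), 3))

A lower negative log-likelihood for one group means the model finds the stereotyped statement more plausible for that group, which is exactly the kind of prediction difference a bias probe is designed to detect.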