Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.486
Social Bias Frames: Reasoning about Social and Power Implications of Language

Abstract: Warning: this paper contains content that may be offensive or upsetting. We then establish baseline approaches that learn to recover SOCIAL BIAS FRAMES from unstructured text. We find that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias (80% F1), they are not effective at spelling out more detailed explanations in terms of SOCIAL BIAS FRAMES. Our study motivates future work that combines structured pragmatic inference wit…

Cited by 198 publications (243 citation statements)
References 56 publications
“…Most relevant to this work, Sap et al. (2020) proposed a framework for modeling pragmatic aspects of many social biases in text. Our work and theirs focus on complementary aspects of a larger goal, namely making NLP safe and inclusive for everyone, but the two approaches differ in several ways.…”
Section: Related Work
confidence: 99%
“…We treat statistical gender bias in human- or model-generated text specifically, and in detail. Sap et al. (2020) proposed a different but compatible perspective, and aimed to situate gender bias within the broader landscape of negative stereotypes in social media text, an approach that can make parallels apparent across different kinds of harmful text. Moreover, they considered different pragmatic dimensions than we do: they targeted negatively stereotyped commonsense implications in arguably innocuous statements, whereas we investigate pragmatic dimensions that straightforwardly map to conversational roles (i.e., topics, addressees, and creators of text).…”
Section: Related Work
confidence: 99%
“…Our work differs from prior work on bias detection in NLP in that we infer bias from data in an unsupervised way, whereas prior work relies on crowd-sourced annotations (Fast et al., 2016; Bolukbasi et al., 2016; Wang and Potts, 2019; Sap et al., 2020). This work typically focuses on specific types of bias, such as condescension (Wang and Potts, 2019) or microaggressions (Breitfeller et al., 2019), and involves carefully constructed annotation schemes that are difficult to generalize to other data sets or types of bias.…”
Section: Related Work
confidence: 99%
“…They also often rely on human-defined "known" stereotypes, such as lists of traditionally male and female occupations obtained through crowd-sourcing, which restricts analysis to a narrow surface-level domain. Similarly, supervised approaches can provide insight into carefully defined types of bias (Wang and Potts, 2019; Breitfeller et al., 2019; Sap et al., 2020), but they rely on human annotation tasks, which are difficult to design or generalize to other domains, especially because social concepts differ across contexts and cultures (Dong et al., 2019).…”
Section: Introduction
confidence: 99%