2019
DOI: 10.48550/arxiv.1911.03642
Preprint

Towards Understanding Gender Bias in Relation Extraction

Abstract: Recent developments in Neural Relation Extraction (NRE) have made significant strides towards Automated Knowledge Base Construction (AKBC). While much attention has been dedicated to improvements in accuracy, to our knowledge there have been no attempts in the literature to evaluate social biases in NRE systems. We create WikiGenderBias, a distantly supervised dataset with a human-annotated test set, whose sentences are specifically curated to analyze gender bias in relation extraction systems. …
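The abstract's mention of a "distantly supervised dataset" refers to the standard distant-supervision heuristic for relation extraction: a sentence mentioning both entities of a knowledge-base triple is assumed to express that triple's relation. The sketch below illustrates the idea in Python under that assumption; the triples, sentences, and function name are illustrative, not the paper's actual pipeline or data.

# Minimal sketch of distant supervision for relation extraction.
# Assumption: any sentence mentioning both entities of a KB triple
# is labeled with that triple's relation (a noisy but standard heuristic).
# All data and names here are illustrative, not from WikiGenderBias.

KB_TRIPLES = [
    ("Marie Curie", "occupation", "physicist"),
    ("Ada Lovelace", "occupation", "mathematician"),
]

SENTENCES = [
    "Marie Curie was a physicist who pioneered research on radioactivity.",
    "Ada Lovelace corresponded with Charles Babbage about the Analytical Engine.",
]

def distant_label(sentences, triples):
    """Pair each sentence with every KB triple whose two entities it mentions."""
    labeled = []
    for sent in sentences:
        for head, relation, tail in triples:
            if head in sent and tail in sent:
                labeled.append((sent, head, relation, tail))
    return labeled

for example in distant_label(SENTENCES, KB_TRIPLES):
    print(example)

Running this labels only the Curie sentence (both "Marie Curie" and "physicist" appear in it) and misses the Lovelace sentence, which shows both how distant labels are produced and why they can be incomplete or noisy.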

Cited by 2 publications (3 citation statements) | References 21 publications
“…As language technologies and NLP become more widely deployed in various aspects of society, there are concerns about the harms they cause to various demographic groups. The focus of our work was on toxic text classification, but prior work has also revealed bias issues along demographic dimensions like race, sexuality, and gender through identity terms for other NLP tasks including question answering [38], relation extraction [25], occupation prediction [16], autocomplete generation [57], and machine translation [60]. In this section, we provide design recommendations based on our findings.…”
Section: Design Recommendations
confidence: 99%
“…Biased metrics or group fairness measures are used to show the differences between demographic groups produced by a model. These NLP tasks include question answering [38], relation extraction [25], text classification [16], autocomplete generation [57] and machine translation [60]. In these tasks, demographic dimensions are an identity axis on which they are evaluated for bias.…”
Section: AI Fairness and Decision Making
confidence: 99%
“…For example, of black female names [59]. Gender disparities can occur in neural relation extraction systems when extracting specific links between entities (occupation [32]). As a result, bias…”
Section: Bias Can Be Found in the Data Used To Construct SW Technologies
confidence: 99%