Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.265
Towards Understanding Gender Bias in Relation Extraction

Abstract: Recent developments in Neural Relation Extraction (NRE) have made significant strides towards automated knowledge base construction. While much attention has been devoted to improvements in accuracy, no attempts have been made in the literature to evaluate the social biases exhibited by NRE systems. In this paper, we create WikiGenderBias, a distantly supervised dataset of over 45,000 sentences, including a 10% human-annotated test set, for the purpose of analyzing gender bias in relation extraction…

Cited by 23 publications (19 citation statements)
References 44 publications
“…Gender affects myriad aspects of NLP, including corpora, tasks, algorithms, and systems (Costa-jussà, 2019; Sun et al., 2019). For example, statistical gender biases are rampant in word embeddings (Jurgens et al., 2012; Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018; Zhao et al., 2018b; Basta et al., 2019; Chaloner and Maldonado, 2019; Du et al., 2019; Ethayarajh et al., 2019; Kaneko and Bollegala, 2019; Kurita et al., 2019), including multilingual ones (Escudé Font and Costa-jussà, 2019; Zhou et al., 2019), and affect a wide range of downstream tasks including coreference resolution (Zhao et al., 2018a; Cao and Daumé III, 2020; Emami et al., 2019), part-of-speech tagging and dependency parsing (Garimella et al., 2019), language modeling (Qian et al., 2019; Nangia et al., 2020), appropriate turn-taking classification (Lepp, 2019), relation extraction (Gaut et al., 2020), identification of offensive content (Sharifirad and Matwin, 2019), and machine translation (Stanovsky et al., 2019; Hovy et al., 2020).…”
Section: Related Work
confidence: 99%
“…Recently, the NLP community has focused on exploring gender bias in NLP systems (Sun et al., 2019), uncovering many gender disparities and harmful biases in algorithms and text (Cao and Chang and McKeown, 2019; Costa-jussà, 2019; Du et al., 2019; Emami et al., 2019; Garimella et al., 2019; Gaut et al., 2020; Habash et al., 2019; Hashempour, 2019; Hoyle et al., 2019; Lee et al., 2019a; Lepp, 2019; Qian, 2019; Sharifirad and Matwin, 2019; Stanovsky et al., 2019; O'Neil, 2016; Blodgett et al., 2020; Nangia et al., 2020). Particular attention has been paid to uncovering, analyzing, and removing gender biases in word embeddings (Basta et al., 2019; Kaneko and Bollegala, 2019; Zhao et al., 2018b; Bolukbasi et al., 2016).…”
Section: Related Work
confidence: 99%
“…Biases have been studied in many information extraction tasks, such as relation extraction (Gaut et al., 2020), named entity recognition (Mehrabi et al., 2020), and coreference resolution (Zhao et al., 2018a). Nevertheless, few works investigate biases in event extraction tasks, particularly on ACE05.…”
Section: Ethics
confidence: 99%