Proceedings of the First Workshop on Gender Bias in Natural Language Processing 2019
DOI: 10.18653/v1/w19-3819

Gendered Pronoun Resolution using BERT and an Extractive Question Answering Formulation

Abstract: The resolution of ambiguous pronouns is a longstanding challenge in Natural Language Understanding. Recent studies have suggested gender bias among state-of-the-art coreference resolution systems. As an example, the Google AI Language team recently released a gender-balanced dataset and showed that the performance of these coreference resolvers is significantly limited on the dataset. In this paper, we propose an extractive question answering (QA) formulation of the pronoun resolution task that overcomes this limitation …
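To illustrate the extractive QA formulation described in the abstract, the following is a minimal sketch using an off-the-shelf BERT model fine-tuned on SQuAD via the Hugging Face transformers pipeline; the checkpoint name, the snippet, and the question template are illustrative assumptions, not taken from the paper itself.

# Sketch: pronoun resolution cast as extractive QA with a generic BERT QA model.
# Assumptions (not from the paper): the SQuAD-tuned checkpoint and the question template.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

snippet = ("Kathleen first won the Berlin marathon, and Mary retired "
           "shortly afterwards; she never raced again.")

# Frame the ambiguous pronoun as a question; the QA model extracts an answer
# span, which can then be matched against the two candidate names.
result = qa(question="Who does 'she' refer to?", context=snippet)
print(result["answer"], result["score"])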

Cited by 10 publications (5 citation statements)
References 14 publications

“…As a result, we excluded them from our counts for techniques as well. We cite the papers here; most propose techniques we would have categorized as "Questionable correlations," with a few as "Other representational harms" (Abzaliev, 2019; Attree, 2019; Bao and Qiao, 2019; Chada, 2019; Ionita et al., 2019; Lois et al., 2019; Wang, 2019; Xu and Yang, 2019; Yang et al., 2019).…”
Section: Acknowledgments
confidence: 99%
“…Question Answering (QA) is an active area of research in Natural Language Processing and the recent advances in pre-trained language models enabled lots of rapid progress in the field (Brown et al., 2020), (Bao et al., 2020), (Raffel et al., 2020). QA is also used as a format to cast several NLP problems (McCann et al., 2018), (Chada, 2019). A common way to build a high performing question answering model is to fine-tune these pre-trained models on the entire training dataset, either via a span-extraction objective (Lan et al., 2020), (Clark et al., 2020b), (Bao et al., 2020) or a span-generation objective (Raffel et al., 2020).…”
Section: Multilingual Results
confidence: 99%
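The span-extraction objective mentioned in the excerpt above can be sketched as follows, assuming a Hugging Face AutoModelForQuestionAnswering head on a BERT encoder; the checkpoint, the example text, and the gold token indices are illustrative assumptions, not taken from any of the cited papers.

# Sketch of one span-extraction fine-tuning step: the encoder predicts
# start/end logits and is trained against the gold answer-span positions.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

inputs = tokenizer("What is the capital of France?",
                   "Paris is the capital of France.",
                   return_tensors="pt")

# Gold start/end token indices of the answer span ("Paris"); these values are
# illustrative. Passing them makes the model return the start/end
# cross-entropy loss used for fine-tuning.
outputs = model(**inputs,
                start_positions=torch.tensor([9]),
                end_positions=torch.tensor([9]))
outputs.loss.backward()  # gradient for one fine-tuning step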
“…The above evaluation scheme does not use the fact that there are only three marked mentions in each snippet. There are however previous works (Attree, 2019; Chada, 2019) that consider the gold-two-mention task (Webster et al., 2018), where the locations of the gold names and pronoun are used during inference as well. We will compare our results in both scenarios: detected-mentions, where models need to detect the mentions by themselves, and gold-two-mention.…”
Section: Methods
confidence: 99%