2019
DOI: 10.48550/arxiv.1904.08783
Preprint
Evaluating the Underlying Gender Bias in Contextualized Word Embeddings

Cited by 23 publications (30 citation statements)
References 13 publications
“…Along similar lines, Basta et al (2019) noted that contextual word-embeddings are less biased than traditional word-embeddings. Yet, biases like gender are propagated heavily in downstream tasks.…”
Section: Real World Implications (mentioning)
confidence: 84%
“…While research on the topic of machine unlearning [Bourtoule et al 2019; Cao and Yang 2015] has started to gain traction, the problem has not yet been studied in depth for foundation models. In addition, foundation models trained on less curated internet data have been shown to exhibit harmful biases targeting specific groups (e.g., gender and racial bias) [Bender et al 2021; Basta et al 2019; Kurita et al 2019] and can produce toxic outputs [Gehman et al 2020] (§5.2: misuse). While strategies such as further fine-tuning the foundation model on carefully curated datasets (for potentially multiple generations) [Solaiman and Dennison 2021] or applying controllable generation techniques [Keskar et al 2019] have shown some success in mitigating harmful behavior, a framework for training equitable and safe foundation models (§5.1: fairness) will likely require further research with a collective effort across the data collection, training, and adaptation phases as well as consultation with domain experts.…”
Section: Use Cases For Adaptation (mentioning)
confidence: 99%
“…However, their focus is different from ours, as our approaches aim at keeping the grammatical gender information and only removing the bias in semantic genders. A few recent studies focus on measuring and reducing gender bias in contextualized word embeddings (Zhao et al, 2019; May et al, 2019; Basta et al, 2019). However, they only focus on English embeddings, in which gender is mostly expressed only by pronouns (Stahlberg et al, 2007).…”
Section: Related Work (mentioning)
confidence: 99%