Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/77

Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models

Abstract: Word embedding models reflect bias towards genders, ethnicities, and other social groups present in the underlying training data. Metrics such as ECT, RNSB, and WEAT quantify bias in these models based on predefined word lists representing social groups and bias-conveying concepts. How suitable these lists actually are to reveal bias - let alone the bias metrics in general - remains unclear, though. In this paper, we study how to assess the quality of bias metrics for word embedding models. In particular, we p…
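As a point of reference for the list-based metrics named in the abstract, the sketch below shows how a WEAT-style effect size is typically computed from predefined target and attribute word lists. This is a minimal illustration, not the paper's implementation: the `embed` lookup and the word lists are assumed placeholders.

```python
# Minimal sketch of a WEAT-style effect size over predefined word lists.
# Assumptions: embed(word) returns a vector (e.g., a dict lookup or a
# gensim KeyedVectors __getitem__); word lists are illustrative only.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, embed):
    # s(w, A, B): mean similarity of w to attribute set A minus to set B.
    return (np.mean([cosine(embed(w), embed(a)) for a in A])
            - np.mean([cosine(embed(w), embed(b)) for b in B]))

def weat_effect_size(X, Y, A, B, embed):
    # Effect size: difference of the mean associations of the two target
    # sets, normalized by the pooled standard deviation (the form used by
    # WEAT as introduced by Caliskan et al., 2017).
    x_assoc = [association(x, A, B, embed) for x in X]
    y_assoc = [association(y, A, B, embed) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)
```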

Cited by 4 publications (3 citation statements); references 17 publications.

Citation statements:

“…The recent debiasing models (Bolukbasi et al., 2016; Wang et al., 2020) have only focused on removing gender bias in pre-trained word embeddings, particularly GloVe (Pennington et al., 2014), which has surfaced several social biases (Spliethöver and Wachsmuth, 2021). In this paper, we propose to mitigate five types of biases in GloVe embeddings, i.e., gender, race, religion, age, and LGBTQ+.…”
Section: Mitigating Multiple Biases in GloVe
confidence: 99%
“…Using WEAT makes our results comparable with related work. We calculate WEAT scores using the implementation of the WEFE framework (Badilla et al., 2020) and use word lists of Spliethöver and Wachsmuth (2021).…”
Section: Evaluating Social Bias in Embeddings
confidence: 99%
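For context on the WEFE usage mentioned in the statement above, the sketch below shows how a WEAT query is typically run with the WEFE framework (Badilla et al., 2020). Class and argument names follow WEFE's published API but may vary across versions; the GloVe model handle and the word lists here are illustrative placeholders, not the lists of Spliethöver and Wachsmuth (2021).

```python
# Sketch: scoring a WEAT query with WEFE on a pre-trained GloVe model.
import gensim.downloader as api
from wefe.word_embedding_model import WordEmbeddingModel
from wefe.query import Query
from wefe.metrics import WEAT

# Wrap a pre-trained embedding (here GloVe via gensim-data) for WEFE.
model = WordEmbeddingModel(api.load("glove-wiki-gigaword-100"), "glove-100")

# A query pairs two target word sets with two attribute word sets.
query = Query(
    target_sets=[["she", "woman", "her"], ["he", "man", "him"]],
    attribute_sets=[["career", "office", "salary"], ["home", "family", "children"]],
    target_sets_names=["Female terms", "Male terms"],
    attribute_sets_names=["Career", "Family"],
)

result = WEAT().run_query(query, model)
print(result)  # dict including the query name and the WEAT score
```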
“…This impacts generalization performance negatively (Shah et al., 2020) and may have harmful consequences in practical applications (Bender et al., 2021; Joseph and Morgan, 2020). So far, one hurdle to mitigate these problems is the limited reliability of common measures of social bias present in a corpus (Spliethöver and Wachsmuth, 2021), stemming from embedding training algorithms not tailored to low-resource situations (Knoche et al., 2019; Spinde et al., 2021).…”
Section: Introduction
confidence: 99%