2021
DOI: 10.48550/arxiv.2110.01094
Preprint

Adversarial Examples Generation for Reducing Implicit Gender Bias in Pre-trained Models

Abstract: Over the last few years, contextualized pre-trained neural language models such as BERT and GPT have shown significant gains on various NLP tasks. One way to enhance the robustness of existing pre-trained models is to generate and evaluate adversarial examples for data augmentation or adversarial learning. Meanwhile, the gender bias embedded in these models appears to be a serious problem in practical applications. Many studies have covered the gender bias produced by word-level information (e.g. g…
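The abstract points to adversarial example generation for data augmentation as one route to robustness. As a purely illustrative sketch, not the paper's method, a minimal gender-swap augmenter could produce counterfactual sentence pairs for augmentation; the swap list and function name below are assumptions:

```python
# Illustrative only: naive gender-counterfactual augmentation, a common
# data-augmentation baseline. Not the method proposed in the paper.
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her",
    "his": "her", "her": "his",   # naive: ignores determiner/pronoun ambiguity
    "man": "woman", "woman": "man",
}

def swap_gender_terms(sentence: str) -> str:
    """Return a gender-swapped copy of `sentence` (whitespace tokenisation)."""
    swapped_tokens = []
    for token in sentence.split():
        core = token.rstrip(".,!?")
        tail = token[len(core):]              # trailing punctuation, if any
        repl = GENDER_SWAPS.get(core.lower())
        if repl is None:
            swapped_tokens.append(token)
        else:
            if core[0].isupper():
                repl = repl.capitalize()
            swapped_tokens.append(repl + tail)
    return " ".join(swapped_tokens)

print(swap_gender_terms("He said his manager praised him."))
# -> "She said her manager praised her."
```

Such swapped pairs are typically added to the training data or used as adversarial probes; the paper itself targets implicit, sentence-level bias rather than simple word swaps.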

Cited by 1 publication (1 citation statement)
References 20 publications
“…Recently, [61] suggested the generation of implicit gender bias samples at sentence-level, which, along with a novel metric, can be used to accurately measure gender bias on contextualised embeddings. [28] proposed a fine-tuning method for debiasing word embeddings that can be applied to any pre-trained language model.…”
Section: Related Work
confidence: 99%
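The cited statement refers to sentence-level bias samples and a novel metric over contextualised embeddings. The paper's own metric is not reproduced here; as a hedged illustration of the general idea, one could probe a pre-trained encoder by comparing the embeddings of a sentence and its gender-swapped counterpart (model choice, mean pooling, and cosine distance are assumptions):

```python
# Illustrative probe, not the paper's metric: distance between contextualised
# embeddings of a sentence and its gender-swapped counterpart.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    """Mean-pooled contextualised embedding of one sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)

def gender_swap_distance(sentence: str, swapped: str) -> float:
    """Cosine distance between a sentence and its gender-swapped version;
    larger values suggest the representation is more sensitive to the swap."""
    a, b = sentence_embedding(sentence), sentence_embedding(swapped)
    return 1.0 - torch.nn.functional.cosine_similarity(a, b, dim=0).item()

print(gender_swap_distance("He is a nurse.", "She is a nurse."))
```

Averaging such distances over a set of template or corpus sentences gives a rough, embedding-level bias score; the cited work proposes a more refined sentence-level measure.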