2020
DOI: 10.48550/arxiv.2004.07667
Preprint

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

Cited by 23 publications (49 citation statements)
References 0 publications
“…compute the counterfactual representation by pre-training an additional instance of the language representation model employed by the classifier, with an adversarial component designed to "forget" the concept of choice, while controlling for confounding concepts. Ravfogel et al. (2020) offered a method for removing information from neural representations by iteratively training linear classifiers and projecting the representations onto their null-spaces.…”
Section: Causal Model Interpretations
confidence: 99%
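The procedure quoted above — repeatedly training a linear classifier to predict the protected attribute, then projecting the representations onto that classifier's null space — can be sketched as follows. This is a minimal illustration on synthetic data using scikit-learn's `LogisticRegression`, not the authors' implementation; the data layout and iteration count are assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic representations: 200 samples, 10 dims. The protected
# attribute z is encoded linearly in the first dimension (a stand-in
# for, e.g., gender information in embeddings).
z = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 10))
X[:, 0] += 3.0 * z

def nullspace_projection(w):
    """Projection matrix onto the null space of a single direction w."""
    w = w / np.linalg.norm(w)
    return np.eye(len(w)) - np.outer(w, w)

X_proj = X.copy()
for _ in range(3):  # a few iterations of classify-then-project
    clf = LogisticRegression(max_iter=1000).fit(X_proj, z)
    w = clf.coef_[0]
    X_proj = X_proj @ nullspace_projection(w)  # remove that direction

# After the loop, the attribute is much harder to predict linearly:
acc = LogisticRegression(max_iter=1000).fit(X_proj, z).score(X_proj, z)
```

Each projection zeroes out exactly one linear direction, so repeated rounds strip successively weaker linear encodings of the attribute while leaving the rest of the representation intact.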
“…We have selected a subset of the Blogs data for this experiment, where author occupation is either student or arts, and age is either teen or adult (two-domain obfuscation). We have taken an approach similar to that of Ravfogel et al. (2020), creating four different levels of imbalance. In all cases, the dataset is balanced with respect to both occupation and age.…”
Section: Fairness Results
confidence: 99%
“…A large body of prior work has attempted to address algorithmic bias by modifying different stages of the natural language processing (NLP) pipeline. For example, Ravfogel et al. (2020) attempt to de-bias word embeddings used by NLP systems, while Elazar and Goldberg (2018) address the bias in learned model representations and encodings. While effective in many cases, such approaches do nothing to mitigate bias in decisions made by humans based on text.…”
Section: Introduction
confidence: 99%
“…Manzini et al. (2019) extended this work to the multi-class setting, enabling debiasing in race and religion. Concurrently with their work, Ravfogel et al. (2020) propose iterative null-space projection.…”
Section: Debiasing Word Embeddings
confidence: 99%