2022
DOI: 10.48550/arxiv.2205.11485
Preprint

Conditional Supervised Contrastive Learning for Fair Text Classification

Abstract: Contrastive representation learning has gained much attention due to its superior performance in learning representations from both image and sequential data. However, the learned representations could potentially lead to performance disparities in downstream tasks, such as increased silencing of underrepresented groups in toxicity comment classification. In light of this challenge, in this work, we study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification…
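The fairness notion named in the abstract, equalized odds, requires the classifier's prediction to be independent of the protected group conditioned on the true label, i.e. equal true-positive and false-positive rates across groups. A minimal sketch of measuring the corresponding gaps (the function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Compute TPR and FPR gaps between two groups (0/1 labels and predictions).

    Equalized odds holds when both gaps are zero, i.e. the prediction is
    independent of the group conditioned on the true label.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        mask = group == g
        tpr = y_pred[mask & (y_true == 1)].mean()  # P(Y_hat=1 | Y=1, A=g)
        fpr = y_pred[mask & (y_true == 0)].mean()  # P(Y_hat=1 | Y=0, A=g)
        rates[g] = (tpr, fpr)
    tpr_gap = abs(rates[0][0] - rates[1][0])
    fpr_gap = abs(rates[0][1] - rates[1][1])
    return tpr_gap, fpr_gap
```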

Cited by 3 publications (3 citation statements). References 28 publications.
“…To compare CLINIC with previous works, we compare against adversarial training (ADV) (Elazar and Goldberg, 2018; Coavoux et al., 2018) and the recently introduced Mutual Information upper bound (Colombo et al., 2021c) (I_α), which has been shown to offer more control over the degree of disentanglement than previous estimators. We compare CLINIC with the work of (Chi et al., 2022; Shen et al., 2021; Gupta et al., 2021; Shen et al., 2022), which uses a method that estimates I(Z; S) (see Eq. 4).…”
Section: Baselines (mentioning)
confidence: 99%
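For context, the adversarial-training baseline (ADV) mentioned in this statement trains an auxiliary classifier to predict the sensitive attribute S from the representation Z while the encoder is trained to make that prediction fail; one common instantiation uses a gradient-reversal layer. A minimal, hypothetical PyTorch sketch of that idea (module and dimension names are assumptions, not from the cited works):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialDebiaser(nn.Module):
    """Task head plus an adversary that predicts the sensitive attribute from Z."""
    def __init__(self, enc_dim=768, hidden=256, n_labels=2, n_groups=2, lam=1.0):
        super().__init__()
        self.lam = lam
        self.task_head = nn.Linear(enc_dim, n_labels)
        self.adversary = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_groups)
        )

    def forward(self, z):
        task_logits = self.task_head(z)
        # The reversed gradient pushes the encoder producing z to remove
        # information about the group label, while the adversary tries to recover it.
        adv_logits = self.adversary(GradReverse.apply(z, self.lam))
        return task_logits, adv_logits

# Training step (sketch): loss = ce(task_logits, y) + ce(adv_logits, s)
```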
“…It takes as input a pair of samples that are either similar or dissimilar, and it brings similar samples closer and dissimilar samples far apart in embedding space (Khosla et al., 2020). Such loss has shown model performance improvement compared to cross-entropy on multiple problems (Chi et al., 2022; Chen et al., 2022; Pan et al., 2022).…”
Section: Introduction (mentioning)
confidence: 99%
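The loss described in this statement is the supervised contrastive (SupCon) objective of Khosla et al. (2020). A minimal sketch of computing it over a batch of embeddings (an illustrative PyTorch implementation, not the cited papers' exact code):

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: samples sharing a label are pulled together,
    all other samples in the batch are pushed apart.

    embeddings: (N, D) features; labels: (N,) integer class ids.
    """
    z = F.normalize(embeddings, dim=1)            # compare in cosine-similarity space
    sim = z @ z.t() / temperature                 # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))   # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Average log-probability over each anchor's positives, then over anchors
    # that have at least one positive in the batch.
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    sum_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    mean_log_prob_pos = sum_log_prob_pos[has_pos] / pos_counts[has_pos]
    return -mean_log_prob_pos.mean()
```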
“…Task-specific methods adopt the strategy of debiasing in the fine-tuning stage of the downstream task, in which the downstream task is known (Chi et al., 2022). One representative work is INLP (Ravfogel et al., 2020, 2022), which repeatedly trains a linear classifier that predicts the target concept, and then projects the representation into the null space of the classifier's weight matrix to remove the representation bias.…”
Section: Task-specific Methods (mentioning)
confidence: 99%
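As a rough illustration of the INLP procedure described above, each round fits a linear probe for the protected attribute and then projects the representations onto the null space of its weight matrix. The sketch below uses NumPy and scikit-learn and is a hypothetical implementation, not Ravfogel et al.'s reference code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nullspace_projection(W):
    """Projection matrix onto the null space of the rows of W (shape k x d)."""
    # Orthonormal basis of the row space via SVD; P removes any component of a
    # representation that lies in that row space.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    rank = int((s > 1e-10).sum())
    B = Vt[:rank]                          # (rank, d) rows spanning the row space
    return np.eye(W.shape[1]) - B.T @ B

def inlp(X, s_labels, n_iters=10):
    """Iterative nullspace projection: repeatedly remove linearly decodable
    information about the protected attribute s from representations X."""
    P_total = np.eye(X.shape[1])
    X_proj = X.copy()
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X_proj, s_labels)
        P = nullspace_projection(clf.coef_)    # clf.coef_ has shape (k, d)
        P_total = P @ P_total                  # accumulate the overall projection
        X_proj = X_proj @ P.T                  # P is symmetric, so P.T == P
    return X_proj, P_total
```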