2022
DOI: 10.48550/arxiv.2203.02110
Preprint
FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis

Abstract: Many works have shown that deep learning-based medical image classification models can exhibit bias toward certain demographic attributes like race, gender, and age. Existing bias mitigation methods primarily focus on learning debiased models, which may not guarantee that all sensitive information is removed and usually come with considerable accuracy degradation on both privileged and unprivileged groups. To tackle this issue, we propose a method, FairPrune, that achieves fairness by pruning. Conv…
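The abstract describes pruning a trained model based on how differently each parameter matters to the privileged and unprivileged groups. A minimal NumPy sketch of that idea follows; the function name, the linear scoring rule `beta * s_unpriv - (1 - beta) * s_priv`, and the use of precomputed per-group saliency arrays are all illustrative assumptions, not the paper's exact formulation (FairPrune's actual saliency is derived from the trained diagnosis model itself).

```python
import numpy as np

def fairness_pruning_mask(saliency_priv, saliency_unpriv, beta=0.5, prune_ratio=0.1):
    """Sketch of saliency-difference pruning (hypothetical names/score).

    A parameter that is important to the privileged group but not to the
    unprivileged one gets a low score and is pruned first, so removing it
    should narrow the accuracy gap between groups.
    """
    # Trade-off between the two groups' importance, weighted by beta.
    score = beta * saliency_unpriv - (1.0 - beta) * saliency_priv
    k = int(prune_ratio * score.size)
    if k == 0:
        return np.ones(score.shape, dtype=bool)  # nothing to prune
    # Indices of the k lowest-scoring parameters across the whole array.
    prune_idx = np.argsort(score, axis=None)[:k]
    mask = np.ones(score.size, dtype=bool)
    mask[prune_idx] = False  # False = pruned (weight zeroed out)
    return mask.reshape(score.shape)

# Toy usage: parameter 0 matters mostly to the privileged group,
# so it is the first to be pruned at a 25% pruning ratio.
s_priv = np.array([1.0, 0.1, 0.5, 0.0])
s_unpriv = np.array([0.1, 1.0, 0.5, 0.0])
mask = fairness_pruning_mask(s_priv, s_unpriv, beta=0.5, prune_ratio=0.25)
```

In practice the mask would be applied elementwise to the corresponding weight tensor, and `beta` controls how much accuracy on the unprivileged group is protected relative to how aggressively privileged-only parameters are removed.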

Cited by 2 publications (2 citation statements)
References 21 publications
“…Wu et al. [27] mitigate unfairness by pruning a pre-trained diagnosis model, considering the difference in feature importance between subgroups. However, their method needs extra time beyond training a precise classification model, whereas our FairAdaBN is a one-step method.…”
Section: Related Work
confidence: 99%
“…Deep learning (DL) provides state-of-the-art performance for medical applications such as image segmentation [1]-[4] and disease diagnosis [5], [6] by learning from large-scale labeled datasets, without which the performance of DL will significantly degrade [7]. However, medical data exist in isolated medical centers and hospitals [8], and combining a large dataset consisting of very sensitive and private medical data in a single location is impractical and even illegal.…”
Section: Introduction
confidence: 99%