2022
DOI: 10.48550/arxiv.2202.02242
Preprint

Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks

Abstract: Machine learning (ML) models have been deployed for high-stakes applications (e.g., the criminal justice system). Due to class imbalance in the sensitive attribute observed in the datasets, ML models are unfair to minority subgroups identified by a sensitive attribute, such as Race or Sex. Fairness algorithms, especially in-processing algorithms, ensure that model predictions are independent of the sensitive attribute for fair classification across different subgroups (e.g., male and female; white and non-white). Furthermo…

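The abstract describes auditing fairness guarantees via attribute inference attacks, in which an adversary tries to recover the sensitive attribute from a trained model's behavior. The sketch below is a minimal, hedged illustration of that general idea only, not the paper's Dikaios attack (whose details are truncated above): it trains an attack model to predict a binary sensitive attribute from a target classifier's confidence scores on synthetic data. Every dataset, model, and parameter choice here is an assumption for illustration.

```python
# Minimal sketch of a generic attribute inference attack (NOT the
# paper's Dikaios method): the adversary learns to recover the
# sensitive attribute s from the target model's confidence scores.
# All data and model choices below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: features X are correlated with a binary sensitive
# attribute s; y is the task label the target model is trained on.
n = 5000
s = rng.integers(0, 2, size=n)                  # sensitive attribute
X = rng.normal(size=(n, 5)) + 0.8 * s[:, None]  # features leak s
y = (X.sum(axis=1) + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, s, test_size=0.5, random_state=0)

# Target model: trained on the task label only; s is not an input.
target = LogisticRegression().fit(X_tr, y_tr)

# Attack model: predicts s from the target's prediction confidences.
# If the target's outputs were truly independent of s (as a fair
# in-processing algorithm intends), attack accuracy should stay near
# the majority sensitive group's base rate.
conf_tr = target.predict_proba(X_tr)
conf_te = target.predict_proba(X_te)
attack = LogisticRegression().fit(conf_tr, s_tr)

base_rate = max(s_te.mean(), 1 - s_te.mean())
attack_acc = accuracy_score(s_te, attack.predict(conf_te))
print(f"majority base rate: {base_rate:.3f}")
print(f"attack accuracy:    {attack_acc:.3f}")
```

An attack accuracy noticeably above the base rate indicates that the model's outputs still leak the sensitive attribute, which is the kind of signal a privacy audit of a fairness algorithm would look for.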
Cited by 0 publications
References 24 publications