Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization 2020
DOI: 10.1145/3386392.3399568
Fair Inputs and Fair Outputs: The Incompatibility of Fairness in Privacy and Accuracy

Abstract: Fairness concerns about algorithmic decision-making systems have been mainly focused on the outputs (e.g., the accuracy of a classifier across individuals or groups). However, one may additionally be concerned with fairness in the inputs. In this paper, we propose and formulate two properties regarding the inputs of (features used by) a classifier. In particular, we claim that fair privacy (whether individuals are all asked to reveal the same information) and need-to-know (whether users are only asked for the …
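As a minimal illustration of the "fair privacy" property described above (a sketch for intuition only, not the paper's formalization; all names are hypothetical), checking whether a decision system asks every individual to reveal the same information reduces to comparing the feature sets requested from each individual:

```python
def satisfies_fair_privacy(requested_features):
    """Check the 'fair privacy' input property: every individual is
    asked to reveal the same information.

    requested_features: dict mapping an individual's id to the set of
    feature names the system asked that individual to reveal.
    """
    feature_sets = list(requested_features.values())
    if not feature_sets:
        return True  # vacuously fair: no one was asked anything
    # Fair privacy holds iff all requested feature sets are identical.
    return all(fs == feature_sets[0] for fs in feature_sets)

# Asking everyone for the same two features satisfies fair privacy:
print(satisfies_fair_privacy({"alice": {"age", "income"},
                              "bob": {"age", "income"}}))        # True

# Asking only some individuals for an extra feature violates it:
print(satisfies_fair_privacy({"alice": {"age", "income"},
                              "bob": {"age", "income", "zip"}}))  # False
```

A system can satisfy fair privacy while still violating need-to-know (everyone is asked for a feature the decision does not require), which is why the paper treats the two input properties separately.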

Cited by 4 publications (3 citation statements) · References 34 publications
“…Several works are concerned with implementing (Goldsteen et al 2021) or auditing compliance with this principle (Rastegarpanah, Gummadi, and Crovella 2021). Rastegarpanah et al (Rastegarpanah, Crovella, and Gummadi 2020) consider decision systems that can handle optional features from a data minimization perspective where the decision maker decides which features are collected for each individual. This principle is distinct from the "right to be forgotten" (Biega et al 2020), which enables individuals to submit requests to have their data deleted.…”
Section: Related Work (mentioning; confidence: 99%)
“…Additionally, fair models trained with the constraint of group fairness are more vulnerable to membership inference attacks [6]. The tradeoffs and (in)compatibility of fairness, privacy and accuracy have been theoretically studied [9,37].…”
Section: Related Work (mentioning; confidence: 99%)
“…Rastegarpanah, Crovella and Gummadi [3] looked into fairness notions for algorithmic decision-making systems. The authors argue that these notions should be expanded to also cover the inputs used by a system.…”
Section: Accepted Papers (mentioning; confidence: 99%)