2020
DOI: 10.1609/aaai.v34i01.5402

Differentially Private and Fair Classification via Calibrated Functional Mechanism

Abstract: Machine learning is increasingly becoming a powerful tool for decision-making in a wide variety of applications, such as medical diagnosis and autonomous driving. Privacy concerns about the training data, and unfair behavior of some decisions with regard to certain attributes (e.g., sex, race), are becoming more critical. Constructing a fair machine learning model while simultaneously providing privacy protection is therefore a challenging problem. In this paper, we focus on the design of classification mod…
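The "calibrated functional mechanism" in the title builds on the functional mechanism, a standard technique that injects Laplace noise into the coefficients of a polynomial (Taylor) approximation of the training objective rather than into gradients or outputs. Below is a minimal sketch for logistic regression, not the paper's calibrated variant: the function name, the assumption that each feature vector satisfies ||x_i||_inf <= 1 with y_i in {-1, +1}, the sensitivity bound, and the ridge term are all illustrative choices.

    import numpy as np

    def functional_mechanism_logreg(X, y, epsilon, rng=None):
        """Epsilon-DP logistic regression via the functional mechanism:
        perturb the coefficients of a second-order Taylor approximation
        of the loss with Laplace noise, then minimize the noisy surrogate.
        Assumes ||x_i||_inf <= 1 and y_i in {-1, +1} (illustrative)."""
        rng = np.random.default_rng() if rng is None else rng
        n, d = X.shape
        # Coefficients of the Taylor surrogate of the logistic loss:
        # sum_i [ log 2 - (1/2) y_i x_i^T w + (1/8) (x_i^T w)^2 ]
        lam1 = -0.5 * (y[:, None] * X).sum(axis=0)   # degree-1 terms, shape (d,)
        lam2 = 0.125 * (X.T @ X)                     # degree-2 terms, shape (d, d)
        # L1-sensitivity when one sample is replaced: each sample contributes
        # at most d/2 (degree-1) + d^2/8 (degree-2) under the bound above
        delta = 2.0 * (d / 2.0 + d * d / 8.0)
        # Inject calibrated Laplace noise into every coefficient
        lam1 = lam1 + rng.laplace(scale=delta / epsilon, size=lam1.shape)
        lam2 = lam2 + rng.laplace(scale=delta / epsilon, size=lam2.shape)
        lam2 = 0.5 * (lam2 + lam2.T)                 # re-symmetrize after noise
        # The noisy lam2 may be indefinite; a small ridge keeps the surrogate
        # minimizable (real implementations regularize the spectrum instead)
        w = np.linalg.solve(2.0 * lam2 + 1e-3 * np.eye(d), -lam1)
        return w

Per the title and abstract, the paper's contribution is to calibrate this injected noise while also enforcing a fairness constraint; the sketch above covers only the privacy half.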

Cited by 34 publications (41 citation statements) | References 14 publications
“…For privacy-preserving data analysis, the standard privacy metric, differential privacy (DP) [9,11], is used to measure the privacy risk of each data sample in the dataset, and has already been adopted in many machine learning domains [4,8,18,20,24]. Basically, under the DP framework, privacy protection is guaranteed by limiting the change in the distribution of the output regardless of a change to the value of any one sample in the dataset.…”
Section: Differential Privacymentioning
confidence: 99%
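For reference, the guarantee paraphrased in this statement is the standard epsilon-differential-privacy condition: a randomized mechanism $\mathcal{M}$ is $\varepsilon$-DP if, for all neighboring datasets $D, D'$ differing in a single sample,

\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S]
\quad \text{for every measurable output set } S.
\]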
“…Recently, researchers have attempted to adopt differential privacy to simultaneously achieve both fairness and privacy preservation [52], [53]. This research is motivated by settings where models are required to be non-discriminatory in terms of certain attributes, but these attributes may be sensitive and so must be protected while training the model [54].…”
Section: Applying Differential Privacy To Improve Fairnessmentioning
confidence: 99%
“…This research is motivated by settings where models are required to be non-discriminatory in terms of certain attributes, but these attributes may be sensitive and so must be protected while training the model [54]. Addressing fairness and privacy preservation simultaneously is challenging because they have different aims [53], [55]. Fairness focuses on the group level and seeks to guarantee that the model's predictions for a protected group (such as women) are the same as its predictions for an unprotected group.…”
Section: Applying Differential Privacy To Improve Fairnessmentioning
confidence: 99%
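The group-level criterion described in this statement is demographic parity: a binary classifier $\hat{Y}$ satisfies it when, for a protected attribute $A$,

\[
\Pr[\hat{Y} = 1 \mid A = a] \;=\; \Pr[\hat{Y} = 1 \mid A = b]
\quad \text{for all groups } a, b.
\]

The cited works also consider other fairness notions, but this is the one the quoted sentence paraphrases.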