Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21)
DOI: 10.1145/3442188.3445879
Can You Fake It Until You Make It? Impacts of Differentially Private Synthetic Data on Downstream Classification Fairness

Cited by 28 publications (14 citation statements)
References 26 publications
“…This could have practical benefits when sharing data, protecting personally identifiable information (PII) while achieving high-quality performance. Such an approach would, of course, be associated with its own risks, some of which are discussed by Cheng et al. (2021).…”
Section: Discussion
confidence: 99%
“…In the meantime, differential privacy has been shown to amplify fairness issues present in the original data [118]. [119] demonstrate that differential privacy does not introduce unfairness into the data generation process or into standard group fairness measures in the downstream classification models, but it does unfairly increase the influence of majority subgroups. Differential privacy also significantly reduces the quality of the images generated from GANs, which decreases the utility of the synthetic data in downstream tasks.…”
Section: Fairness
confidence: 90%
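The citing papers do not include an implementation here, but differentially private GANs are typically trained with the DP-SGD recipe: clip each per-example gradient, average, and add calibrated Gaussian noise. A minimal NumPy sketch of that clip-and-noise step (function name, batch layout, and parameter values are illustrative assumptions, not from the cited works) makes visible why the injected noise can degrade generated image quality:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style update direction: clip each per-example gradient
    to clip_norm, average, then add calibrated Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(per_example_grads)
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    mean_grad = np.mean(clipped, axis=0)
    # Noise on the averaged gradient follows the usual sigma * C / batch_size scale.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=mean_grad.shape)
    return mean_grad + noise

# Toy usage: a noisy update direction from a batch of 4 per-example gradients.
grads = [np.random.default_rng(i).normal(size=3) for i in range(4)]
print(dp_sgd_step(grads))
```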
“…Very recently, synthetic data generation methods have begun to adopt fairness as one of the key elements of equitable data generation. Differentially private GANs attempt to preserve privacy and mitigate unfairness, but have been shown to lead to decreased synthetic image quality and utility [27]. Others measured synthetic data fairness by training models on the data and then calculating fairness metrics such as demographic parity, equality of odds, etc.…”
Section: Related Work
confidence: 99%
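For context on the metrics named in the statement above, this is a minimal sketch of how demographic parity and equalized-odds gaps are commonly computed from a downstream model's predictions. It assumes binary labels, binary predictions, and a two-valued protected attribute; the array names and toy data are hypothetical, not from the cited papers:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(pred=1 | group 0) - P(pred=1 | group 1)| for a binary group attribute."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Worst-case between-group gap across TPR (true label 1) and FPR (true label 0)."""
    gaps = []
    for label in (1, 0):
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Toy example with hypothetical downstream-model predictions.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))   # 0.25
print(equalized_odds_gap(y_true, y_pred, group))  # 0.5
```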
“…They, too, found that models trained on differentially private data result in more disparity and poorer fairness results. In [27], the authors examined group fairness metrics such as the parity gap, specificity gap, and recall gap on a downstream model trained on the synthetic data.…”
Section: Related Work
confidence: 99%
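The gap metrics attributed to [27] follow the same between-group-difference pattern. A hedged sketch of the parity, recall, and specificity gaps under the same binary-group assumptions (function and variable names are illustrative):

```python
import numpy as np

def fairness_gaps(y_true, y_pred, group):
    """Between-group gaps in positive-prediction rate (parity gap),
    recall/TPR (recall gap), and specificity/TNR (specificity gap)."""
    def per_group(g):
        yt, yp = y_true[group == g], y_pred[group == g]
        return np.array([
            yp.mean(),                  # positive-prediction rate
            yp[yt == 1].mean(),         # recall (TPR)
            1.0 - yp[yt == 0].mean(),   # specificity (TNR)
        ])
    gaps = np.abs(per_group(0) - per_group(1))
    return dict(zip(["parity_gap", "recall_gap", "specificity_gap"], gaps))

# Toy example: each metric differs by 0.5 between the two groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_gaps(y_true, y_pred, group))
```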