2020
DOI: 10.1109/tdsc.2020.3006287
How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning

Abstract: This paper firstly considers the research problem of fairness in collaborative deep learning, while ensuring privacy. A novel reputation system is proposed through digital tokens and local credibility to ensure fairness, in combination with differential privacy to guarantee privacy. In particular, we build a fair and differentially private decentralised deep learning framework called FDPDDL, which enables parties to derive more accurate local models in a fair and private manner by using our developed two-stage…

Cited by 35 publications (24 citation statements)
References 30 publications

Citation statements (ordered by relevance):
“…For brevity, we use (ε, δ)-DP to represent (ε, δ)-LDP for the rest of the paper. We remark that all the randomisation mechanisms used for CDP, including the Laplace mechanism and the Gaussian mechanism (Dwork and Roth, 2014), can be individually used by each party to inject noise into local data to ensure LDP before releasing (Lyu et al., 2020a; Yang et al., 2020; Lyu et al., 2020b; Sun and Lyu, 2020). In particular, we adopt the Laplace mechanism, which ensures ε-DP with δ = 0, throughout the paper.…”
Section: Introduction (mentioning; confidence: 99%)
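
The statement above adopts the Laplace mechanism to achieve ε-DP with δ = 0. As a minimal illustrative sketch (not the paper's implementation), each party could perturb a scalar local statistic of known L1 sensitivity before releasing it; the function name and parameter values here are hypothetical:

```python
import numpy as np

def laplace_release(value: float, sensitivity: float, epsilon: float,
                    rng: np.random.Generator) -> float:
    # Laplace mechanism: noise with scale = sensitivity / epsilon
    # yields epsilon-DP with delta = 0 (Dwork and Roth, 2014).
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
local_stat = 0.42                 # hypothetical local statistic
noisy = laplace_release(local_stat, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller ε forces a larger noise scale, trading accuracy of the released statistic for a stronger privacy guarantee.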
“…Many works [33-41] have tried to address fairness and privacy guarantees together. Kilbertus et al. [34] is one of the first proposals that addressed the need for combining fairness requirements with privacy guarantees.…”
Section: Related Work (mentioning; confidence: 99%)
“…The FairFace dataset [65] is a collection of ≈100 thousand facial images extracted from the YFCC-100M Flickr dataset [165]. Automated models trained on FairFace can exploit age group (age ranges of [0-2], [3-9], [10-19], [20-29], [30-39], [40-49], [50…”
Section: The Dataset (mentioning; confidence: 99%)
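
For concreteness, the age groups named in the quote can be encoded as simple bins; the groups beyond [40-49] are assumptions filled in from FairFace's standard nine-group scheme, since the quoted text above is truncated:

```python
# Assumed FairFace-style age groups; everything past [40-49] is an
# assumption, as the quoted text above is truncated mid-list.
AGE_BINS = [(0, 2), (3, 9), (10, 19), (20, 29), (30, 39),
            (40, 49), (50, 59), (60, 69), (70, None)]

def age_group(age: int) -> str:
    for lo, hi in AGE_BINS:
        if hi is None and age >= lo:
            return f"[{lo}+]"
        if hi is not None and lo <= age <= hi:
            return f"[{lo}-{hi}]"
    raise ValueError(f"unsupported age: {age}")

assert age_group(4) == "[3-9]"
assert age_group(72) == "[70+]"
```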
“…These models allow both the private data owner and the external model owner to encrypt their data and model respectively and perform secure training and model inference in a distributed environment, without needing to trust any particular entity. Another approach suggested by researchers in [39] uses a differentially private generative adversarial network (GAN) to generate secret tokens for detecting malicious attackers, and differentially private stochastic gradient descent to handle privacy leakage. Because FL requires distributed federated nodes to communicate in a secure and privacy-oriented way, researchers have proposed a compression technique for efficient communication, and additive HE and DP for data and model security and privacy [40].…”
Section: Introduction (mentioning; confidence: 99%)
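
The differentially private stochastic gradient descent mentioned in the statement above follows the usual clip-then-noise pattern: clip each per-example gradient, average, and add Gaussian noise. Below is a minimal NumPy sketch under assumed hyperparameters (clip norm, noise multiplier), not the cited paper's actual implementation:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    # DP-SGD step: clip each per-example gradient to L2 norm `clip_norm`,
    # average, then add Gaussian noise of scale noise_multiplier * clip_norm
    # (divided by batch size, since noise is applied to the mean gradient).
    rng = rng or np.random.default_rng()
    batch = len(per_example_grads)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage with hypothetical per-example gradients:
params = np.zeros(3)
grads = [np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.2, -0.3])]
params = dp_sgd_step(params, grads)
```

Clipping bounds each example's influence on the update, which is what lets the added Gaussian noise translate into a formal (ε, δ)-DP guarantee via standard composition accounting.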