2020
DOI: 10.1109/tifs.2019.2939713
Privacy-Preserving Collaborative Deep Learning With Unreliable Participants

Abstract: With powerful parallel-computing GPUs and massive user data, neural-network-based deep learning can fully exert its strength in problem modeling and solving, and has achieved great success in many applications such as image classification, speech recognition, and machine translation. While deep learning has become increasingly popular, the problem of privacy leakage grows more and more urgent, given that the training data may contain highly sensitive information, e.g., personal medical records, …

Cited by 170 publications (63 citation statements)
References: 42 publications
“…Accuracy score calculation for individual trainers could be an interesting approach to explore in order to increase the global training efficiency. In [56], such an approach was proposed to reduce the impact of participants with low-quality data on the training process.…”
Section: Discussion and Learned Lessons (mentioning)
confidence: 99%
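As a concrete illustration of the idea above, here is a minimal Python sketch of accuracy-weighted aggregation, where each participant's update is scaled by its accuracy score so that low-quality contributors carry less weight. The function name and the use of a held-out validation set for scoring are assumptions for illustration, not the exact scheme of [56].

import numpy as np

def accuracy_weighted_average(updates, accuracies):
    # Normalize accuracy scores into convex-combination weights so that
    # participants with low-quality data have proportionally less influence.
    weights = np.asarray(accuracies, dtype=float)
    weights = weights / weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Hypothetical example: three participants, the third with low-quality data;
# scores would come from a held-out validation set.
updates = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([5.0, -3.0])]
accuracies = [0.92, 0.90, 0.35]
print(accuracy_weighted_average(updates, accuracies))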
“…It considers that any assumed semi-honest entity (cloud, data owners, ...) honestly follows the security protocol without performing malicious actions against the protocol or participants, but it could try to learn or infer sensitive information from private data, potentially colluding with some participants [3,20,23]. However, some reviewed works also adopted scenarios with an active adversary model [29,56], where an adversary could deviate from the protocol in an arbitrary way. It is worth noting that in some scenarios, adversaries may have more capabilities and knowledge than in others.…”
Section: Adversary Models (mentioning)
confidence: 99%
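To make the two adversary models concrete, here is a toy Python sketch (class names hypothetical, not taken from the cited works): an honest-but-curious server follows the aggregation protocol exactly but retains every raw update for later inference, while an active adversary may deviate arbitrarily, e.g. by returning a manipulated aggregate.

import numpy as np

class HonestButCuriousServer:
    # Semi-honest: follows the aggregation protocol faithfully, but
    # passively retains every raw update for later inference attempts.
    def __init__(self):
        self.observed = []

    def aggregate(self, updates):
        self.observed.extend(updates)      # passive leakage; protocol untouched
        return np.mean(updates, axis=0)    # correct, protocol-compliant result

class ActiveAdversaryServer:
    # Active: may deviate from the protocol in an arbitrary way,
    # here by returning a poisoned aggregate.
    def aggregate(self, updates):
        return -np.mean(updates, axis=0)   # arbitrary deviation (sign flip)

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
print(HonestButCuriousServer().aggregate(updates))  # -> [2. 3.], yet both updates were recorded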
“…Noise could be added to the weights in each iteration of training. However, this method might affect convergence, since the output of the algorithm is computed based on the weights. Hence, if noise is added to each weight, the total amount of noise might become large enough to make the loss never converge.…”
[Flattened table residue, perturbation location vs. strength and cost: [67] strong, low; Gradient [65], [68], [69] strong, low; Weights [70], [71] very strong, very high; Classes [72], [73], [74] very strong, low]
Section: Differential Privacy in Deep Neural Networks (mentioning)
confidence: 99%
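The convergence concern can be seen in a few lines. Below is a hedged Python sketch (the learning rate, noise scale, and quadratic toy objective are all assumptions for illustration) showing that injecting Gaussian noise into every weight at every step leaves the loss hovering at a noise floor rather than converging to the optimum.

import numpy as np

rng = np.random.default_rng(0)

def noisy_sgd_step(w, grad, lr=0.1, sigma=0.05):
    # One gradient step, then Gaussian perturbation of every weight.
    w = w - lr * grad
    return w + rng.normal(0.0, sigma, size=w.shape)

# Toy objective f(w) = ||w||^2 / 2 with exact minimum at w = 0; its gradient is w.
w = np.ones(1000)
for _ in range(500):
    w = noisy_sgd_step(w, grad=w)

# Because fresh noise enters all 1000 weights at every iteration, the loss
# stalls at a floor proportional to the number of weights instead of reaching zero.
print("final loss:", 0.5 * np.dot(w, w))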
“…Several communication-efficient and privacy-preserving distributed approaches have been proposed recently, including Practical Secure Aggregation (PSA) [41], Federated Extreme Boosting (XGB) [42], Efficient and Privacy-Preserving Federated Deep Learning (EPFDL) [43], and Privacy-Preserving Collaborative Learning (PPCL) [44], as listed in Table 3. Specifically, PSA and XGB utilise collaborative training in order to resist collusion among adversaries, but neither approach guarantees communication efficiency.…”
Section: Functional Comparison (mentioning)
confidence: 99%
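As a rough illustration of the core idea behind secure aggregation schemes such as PSA [41], the Python sketch below uses pairwise masks that cancel in the sum: each pair of participants shares a random mask that one adds and the other subtracts, so the server learns only the aggregate. The shared-RNG mask derivation here is a stand-in for the real pairwise key agreement, and dropout handling is omitted.

import numpy as np

def masked_updates(raw_updates, seed=42):
    # Hide each participant's update with pairwise masks that cancel in the
    # sum: for each pair (i, j) with i < j, i adds mask m_ij and j subtracts it.
    rng = np.random.default_rng(seed)          # stand-in for pairwise key agreement
    n = len(raw_updates)
    masked = [u.astype(float).copy() for u in raw_updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=raw_updates[0].shape)
            masked[i] += m                     # participant i adds the shared mask
            masked[j] -= m                     # participant j subtracts it
    return masked

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
masked = masked_updates(updates)
# The server sees only masked vectors, yet their sum equals the true sum.
print(sum(masked))                             # -> [ 9. 12.] (up to float rounding)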