2018
DOI: 10.1109/tifs.2017.2787987

Privacy-Preserving Deep Learning via Additively Homomorphic Encryption

Cited by 975 publications (398 citation statements)
References 17 publications
“…One approach with HE has been proposed for privacy-preserving weight transmission among multiple owners who wish to apply a machine learning method over their combined data sets [21,22]. However, this approach cannot be applied to network training in the encrypted domain.…”
Section: Privacy-Preserving Machine Learning
confidence: 99%
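The additively homomorphic property these schemes rely on is easy to see with a toy Paillier cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so an aggregator can sum encrypted gradient updates from several owners without decrypting any individual contribution. The sketch below is a minimal, insecure illustration with tiny hard-coded primes and integer-scaled gradients; the names and parameters are assumptions for illustration, not the construction from the paper.

```python
# Minimal, insecure sketch of additively homomorphic aggregation in the
# style of Paillier. Toy parameters only (tiny primes, integer-scaled
# "gradients"); an illustration, not the paper's protocol.
import random
from math import gcd

p, q = 293, 433                  # toy primes; real deployments use >= 1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                        # standard Paillier generator choice
lam = (p - 1) * (q - 1)         # a multiple of lcm(p-1, q-1), which suffices here
mu = pow(lam, -1, n)            # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:        # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n recovers m * lam mod n; multiplying by mu yields m
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Two owners encrypt their (integer-scaled) gradient updates.
grad_a, grad_b = 17, 25
c_a, c_b = encrypt(grad_a), encrypt(grad_b)

# The aggregator multiplies ciphertexts, which adds the plaintexts
# underneath, so only the aggregate is revealed on decryption.
c_sum = (c_a * c_b) % n2
assert decrypt(c_sum) == grad_a + grad_b
```

Ciphertext multiplication modulo n² is the only operation the aggregator performs, which is why additive HE supports secure summation of updates but not arbitrary computation (i.e., full network training) in the encrypted domain, as the excerpt notes.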
“…However, although the data is not explicitly shared in its original format, it is still possible for adversaries to approximately reconstruct the raw data, especially when the architecture and parameters are not completely protected. In addition, FL can expose intermediate results such as parameter updates from an optimization algorithm like stochastic gradient descent (SGD), and the transmission of these gradients may leak private information [9] when they are exposed together with a data structure such as image pixels. Moreover, the presence of malicious users may introduce further security issues.…”
Section: Introduction
confidence: 99%
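The gradient-leakage risk described above can be made concrete for a single linear layer: with a squared-error loss, the weight gradient is the outer product of the output error and the input, so anyone observing the transmitted update can recover the input (e.g., image pixels) up to a scale factor. A minimal numpy sketch with a hypothetical toy model (shapes and names are illustrative, not taken from [9]):

```python
# Minimal sketch of gradient leakage from a shared SGD update.
# For a linear model y = W x with squared-error loss, dL/dW = delta @ x.T,
# so any gradient row with a nonzero error component reveals x up to scale.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((64, 1))              # private input, e.g. flattened image pixels
W = rng.standard_normal((10, 64))    # model weights
target = rng.standard_normal((10, 1))

delta = W @ x - target               # output error for squared-error loss
grad_W = delta @ x.T                 # the update a participant would transmit

# An observer of grad_W recovers x up to a scale factor from any
# nonzero row of the gradient matrix.
recovered = (grad_W[0] / delta[0, 0]).reshape(-1, 1)
assert np.allclose(recovered, x)
```

Deeper networks require iterative reconstruction rather than this closed form, but the single-layer case already shows why transmitting raw gradients is not automatically private.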
“…

| Countermeasure | Advantages | Disadvantages |
|---|---|---|
| Adversarial training [94] | Very easy to understand and implement; scalable and able to handle complex datasets | Depends on the sample size in the training phase |
| Defense distillation [80] | Simple and has defense ability | Difficult to converge; high complexity |
| Ensemble method [95] | Model-independent; good generalization | Does not rebut the training data; computation overhead |
| Differential privacy [96] | Preserves the privacy of training and learning data; model-independent; low overhead and low complexity | Also affects legitimate data |
| Homomorphic encryption [97] | Maintains the security and privacy of data; simple | Increases the data size; extensive computation overhead |

…as well as to detect information security threats. Restricted Boltzmann Machines (RBMs) are also used for the same purpose, but we cannot find much study using this technique for security purposes.…”
Section: Countermeasure Methods, Advantages, and Disadvantages
confidence: 99%
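As a concrete instance of the differential-privacy row above, a common recipe (in the style of DP-SGD) clips each per-example gradient and adds calibrated Gaussian noise before the update is shared. The clipping norm C and noise multiplier sigma below are illustrative assumptions, not values from the cited survey:

```python
# Minimal sketch of a DP-SGD-style gradient sanitizer: clip each
# per-example gradient to L2 norm at most C, then add Gaussian noise
# scaled by sigma * C before releasing the averaged update.
# C and sigma are illustrative assumptions, not from the cited survey.
import numpy as np

def private_gradient(per_example_grads: np.ndarray,
                     C: float = 1.0, sigma: float = 1.1,
                     seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, C / max(norm, 1e-12)))  # L2 clipping
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * C, size=total.shape)    # Gaussian mechanism
    return (total + noise) / len(per_example_grads)

# 32 per-example gradients over 100 parameters (toy data).
grads = np.random.default_rng(1).standard_normal((32, 100))
update = private_gradient(grads)
```

Because the noise is calibrated only to the clipping bound C, the mechanism's cost is a small amount of extra computation per example, consistent with the "low overhead, low complexity" entry in the table.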