Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS 2017)
DOI: 10.1145/3133956.3133982
Practical Secure Aggregation for Privacy-Preserving Machine Learning

Cited by 2,377 publications (1,928 citation statements)
References 34 publications
“…On the other hand, from a technical point of view, how to store, query, and process data such that there are no privacy concerns when building deep learning systems has become an even more difficult but interesting challenge. Building a privacy-preserving algorithm requires combining cryptography and deep learning, mixing techniques from a wide range of subjects such as data analysis, distributed computing, federated learning, and differential privacy, in order to achieve models with strong security, fast run time, and great generalizability (Dwork and Roth, 2014; Abadi et al., 2016; Bonawitz et al., 2017; Ryffel et al., 2018). In this respect, Papernot (2018) published a guidance report that summarized a set of best practices for improving the privacy and security of machine learning systems.…”
Section: Future Work
confidence: 99%
“…In a world permeated by smart devices with tremendous computing power and ubiquitous network access, such an approach could soon be poised to combine the above ideas into a powerful global knowledge-extraction "organism", which is the underlying idea of Google's new federated learning approach [85]. In recent work, they trained a deep neural network (for an overview of deep learning in neural networks, see [3]) in a federated learning setting by applying distributed gradient descent across user-held training data on mobile devices [86], which is a current hot topic [87].…”
Section: Knowledge Extraction (KE)
confidence: 99%
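The statement above describes the core federated learning loop: clients run gradient descent locally on data that never leaves the device, and the server only averages the resulting model updates. The following is a minimal sketch of that idea, assuming a toy least-squares model in NumPy; the function names (`local_update`, `federated_round`) are illustrative and not taken from [86].

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One client's local gradient step on its private (X, y).

    Raw data never leaves the device; only updated weights are returned.
    """
    grad = X.T @ (X @ weights - y) / len(y)  # least-squares gradient
    return weights - lr * grad

def federated_round(global_weights, clients):
    """One round of federated averaging over user-held datasets."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Weight each client's update by its dataset size, then average.
    return np.average(updates, axis=0, weights=sizes)

# Toy usage: three clients, each holding a private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without pooling raw data
```

Note that in this plain form the server still sees each client's individual update, which is exactly the leak that the secure aggregation protocol discussed next is designed to close.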
“…Bonawitz et al. demonstrate SECAGG, a practical protocol for secure aggregation in the federated learning setting, achieving < 2× communication expansion while tolerating up to 1/3 of user devices dropping out midway through the protocol, and while maintaining security against an adversary with malicious control of up to 1/3 of the user devices and full visibility of everything happening on the server [6]. The key idea in SECAGG is to have each pair of users agree on randomly sampled 0-sum pairs of mask vectors of the same length as the model updates.…”
Section: B. Secure Aggregation
confidence: 99%
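The pairwise 0-sum masking idea quoted above can be shown in a few lines: within each pair, one user adds a shared random mask and the other subtracts it, so every mask cancels in the server's sum while each individual masked update looks random. Below is a toy Python sketch of just that cancellation; the real SECAGG protocol derives the shared seeds via Diffie-Hellman key agreement and adds Shamir secret sharing to recover from dropouts, both of which are omitted here, and the names (`masked_update`, `MOD`) are illustrative.

```python
import numpy as np

MOD = 2**16          # toy modulus; SECAGG operates over a finite group
VEC_LEN = 4          # length of each model-update vector

def masked_update(user_id, update, shared_seeds):
    """Blind one user's update with pairwise 0-sum masks.

    shared_seeds[(u, v)] is a seed known to both u and v (in SECAGG it
    comes from a Diffie-Hellman key agreement, omitted in this sketch).
    For each pair, the lower-id user adds the mask and the higher-id
    user subtracts it, so all masks cancel in the aggregate sum.
    """
    masked = update.copy() % MOD
    for (u, v), seed in shared_seeds.items():
        if user_id not in (u, v):
            continue
        # Both users of a pair regenerate the identical mask from the seed.
        mask = np.random.default_rng(seed).integers(0, MOD, VEC_LEN)
        masked = (masked + mask) % MOD if user_id == u else (masked - mask) % MOD
    return masked

# Toy run: three users, each holding a private update vector.
rng = np.random.default_rng(42)
updates = {uid: rng.integers(0, 100, VEC_LEN) for uid in range(3)}
seeds = {(u, v): rng.integers(0, 2**32)
         for u in range(3) for v in range(u + 1, 3)}

# The server sees only masked vectors, each individually random-looking...
masked = [masked_update(uid, upd, seeds) for uid, upd in updates.items()]
# ...yet their sum equals the sum of the true updates (mod MOD).
assert np.array_equal(sum(masked) % MOD, sum(updates.values()) % MOD)
print(sum(masked) % MOD)
```

Without the dropout-recovery machinery, a single missing user would leave its pairwise masks uncancelled and corrupt the sum; handling that case is precisely what the secret-sharing layer of SECAGG provides.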