2020
DOI: 10.2478/popets-2021-0011

Falcon: Honest-Majority Maliciously Secure Framework for Private Deep Learning

Abstract: We propose Falcon, an end-to-end 3-party protocol for efficient private training and inference of large machine learning models. Falcon presents four main advantages: (i) it is highly expressive, with support for high-capacity networks such as VGG16; (ii) it supports batch normalization, which is important for training complex networks such as AlexNet; (iii) it guarantees security with abort against malicious adversaries, assuming an honest majority; (iv) lastly, Falcon presents new theoretical insights for pr…

Citations: cited by 121 publications (157 citation statements)
References: 37 publications
“…MiniONN [44], DeepSecure [52] and XONN [50] use optimized garbled circuits [63] that allow very few communication rounds, but they do not support training and alter the neural network structure to speed up execution. Other frameworks such as ShareMind [10], SecureML [46], SecureNN [59], QUOTIENT [2] or more recently FALCON [60] rely on additive secret sharing and allow secure model evaluation and training. They use simpler and more efficient primitives, but require a large number of rounds of communication, such as 11 in [59] or 5 + log2(n) in [60] (typically 10 with n = 32) for ReLU.…”
Section: Related Work
confidence: 99%
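To make the round counts quoted above concrete: with n-bit values, 5 + log2(n) evaluates to 10 for n = 32, versus the 11 rounds attributed to SecureNN. The sketch below is a toy calculation under that reading of the quoted figures; the function name falcon_relu_rounds is illustrative and not taken from either framework's code.

```python
import math

# Toy comparison of the ReLU round counts quoted above:
# 11 rounds for the SecureNN-style protocol [59] versus
# 5 + log2(n) rounds for the FALCON-style protocol [60].
SECURENN_RELU_ROUNDS = 11

def falcon_relu_rounds(n_bits: int) -> int:
    # five fixed rounds plus a log2(n)-round comparison over n-bit values
    return 5 + int(math.log2(n_bits))

if __name__ == "__main__":
    n = 32  # typical fixed-point word size
    print("SecureNN ReLU rounds:", SECURENN_RELU_ROUNDS)        # 11
    print("FALCON ReLU rounds, n=32:", falcon_relu_rounds(n))   # 5 + 5 = 10
```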
“…Other frameworks such as ShareMind [10], SecureML [46], SecureNN [59], QUOTIENT [2] or more recently FALCON [60] rely on additive secret sharing and allow secure model evaluation and training. They use simpler and more efficient primitives, but require a large number of rounds of communication, such as 11 in [59] or 5 + log2(n) in [60] (typically 10 with n = 32) for ReLU. ABY [23], Chameleon [51] and more recently ABY3 [45], CrypTFlow [41] and [21] mix garbled circuits, additive or binary secret sharing based on what is most efficient for the operations considered.…”
Section: Related Work
confidence: 99%
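The statement above contrasts frameworks built on a single sharing type with mixed-protocol designs that convert between representations. Below is a minimal sketch of the two flavours being mixed, additive (arithmetic) sharing and XOR (binary) sharing over three parties; the 64-bit ring and the helper names are illustrative assumptions, not the API of any cited framework.

```python
import secrets

RING = 2**64  # illustrative modulus for arithmetic shares

def additive_share(x: int, parties: int = 3):
    """Arithmetic sharing: shares sum to x mod RING (cheap additions, multiplication via triples)."""
    shares = [secrets.randbelow(RING) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % RING)
    return shares

def binary_share(x: int, parties: int = 3):
    """Binary sharing: shares XOR to x (cheap bitwise operations and comparisons)."""
    shares = [secrets.randbits(64) for _ in range(parties - 1)]
    acc = 0
    for s in shares:
        acc ^= s
    shares.append(x ^ acc)
    return shares

def reconstruct_additive(shares):
    return sum(shares) % RING

def reconstruct_binary(shares):
    acc = 0
    for s in shares:
        acc ^= s
    return acc

if __name__ == "__main__":
    x = 42
    assert reconstruct_additive(additive_share(x)) == x
    assert reconstruct_binary(binary_share(x)) == x
```

Mixed-protocol frameworks pay a conversion cost to move a value between these two representations, choosing whichever form is cheaper for the next operation (arithmetic shares for linear layers, binary shares or garbled circuits for comparisons).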
“…Privacy on offloaded computation can also be provided by means of cryptographic tools such as homomorphic encryption and/or Secure Multiparty Computation (SMC) [9,18,23,30,45,48,54,85]. However, these approaches suffer from a prohibitive computational cost (Table 1), on both the cloud and user side, which is exacerbated by the complexity and compute-intensity of neural networks, especially on resource-constrained edge devices.…”
Section: Machine Learning Privacy
confidence: 99%
“…Slalom [10] reaches high performance using Intel SGX-based TEEs. Falcon [14], which instead relies on SMC with secret sharing, leverages an untrusted third party to speed up the computation and is currently the most computationally efficient framework [15]. However, it relies on the assumption that at least 2 out of 3 parties behave honestly.…”
Section: B. Privacy Preserving Data Processing
confidence: 99%
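As context for the 2-out-of-3 honesty assumption mentioned above, the sketch below shows the replicated secret sharing pattern that honest-majority 3-party frameworks of this kind build on: each party holds two of the three additive shares, so any two parties can reconstruct the secret while a single (possibly corrupted) party learns nothing. The ring size and function names are illustrative assumptions, not Falcon's actual implementation.

```python
import secrets

RING = 2**32  # illustrative ring size; the exact modulus is an assumption here

def rss_share(x: int):
    """2-out-of-3 replicated sharing: x = x1 + x2 + x3 (mod RING); party i holds (x_i, x_{i+1})."""
    x1, x2 = secrets.randbelow(RING), secrets.randbelow(RING)
    x3 = (x - x1 - x2) % RING
    shares = [x1, x2, x3]
    return [(shares[i], shares[(i + 1) % 3]) for i in range(3)]

def rss_reconstruct(pair_i, pair_j, i, j):
    """Any two parties together hold all three additive shares, so they can recover x."""
    known = {}
    known[i], known[(i + 1) % 3] = pair_i
    known[j], known[(j + 1) % 3] = pair_j
    return sum(known.values()) % RING

if __name__ == "__main__":
    parties = rss_share(1234)
    # parties 0 and 2 jointly reconstruct; any single party alone learns nothing about x
    assert rss_reconstruct(parties[0], parties[2], 0, 2) == 1234
```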