2021
DOI: 10.1007/s12083-021-01076-8
Privacy-preserving neural networks with Homomorphic encryption: Challenges and opportunities

Abstract: Classical machine learning demands considerable computing power to train models on big data within a reasonable amount of time. In recent years, cloud services have emerged to facilitate this process, but outsourcing introduces new security threats of data breaches. Modern encryption techniques ensure security and are considered the best option to protect stored data and data in transit from unauthorized third parties. However, a decryption step is necessary when the data must be proces…

Cited by 69 publications (44 citation statements)
References 78 publications
“…Homomorphic encryption schemes can in general be partitioned into three main sub-groups, i.e. [248], [257]:…”
Section: Homomorphic Encryption Techniques
“…Homomorphic encryption is relevant also beyond the field of biometrics. The reader is referred to some of the existing surveys, e.g., [257], [265], [266] for more detailed information on this topic.…”
Section: Homomorphic Encryption Techniques
“…Consequently HE, contrary to anonymization and perturbation techniques, entirely preserves utility and, contrary to other simple distributed-protocol techniques, automatically guarantees the privacy of the single individuals. The major drawback of HE is its high computational overhead and its limitations for some operations [72,73]. In particular, three possible approaches are defined: Partially, Somewhat, and Fully HE.…”
Section: Related Work
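The Partially HE flavor mentioned above supports a single operation (typically addition) an unlimited number of times. As a minimal illustration of that additive property, here is a toy Paillier-style cryptosystem; the primes and parameter sizes are hypothetical, deliberately tiny, and insecure — a real deployment would use ~2048-bit primes via a vetted library, not hand-rolled code:

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Tiny INSECURE parameters, for illustration only.
import math
import random

p, q = 293, 433              # toy primes (real systems: ~2048-bit)
n = p * q
n2 = n * n
g = n + 1                    # standard choice g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))        # 42, computed without decrypting a or b
```

The multiplication of ciphertexts modulo n² is exactly the "single supported operation" that makes this scheme Partially rather than Fully homomorphic: arbitrary circuits (as needed for deep networks) require the Fully HE schemes discussed in the surveyed paper.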
“…For our purpose, namely ML-related applications, Somewhat HE is the most exploited approach, since it delivers the best trade-off [71] for our application. On the one hand, HE ensures the privacy of the single individual, especially in the commonly adopted case where computing and memory resources are outsourced to a third-party service provider; on the other hand, it dramatically increases the computational requirements and restricts the possible network architectural choices [72,73]. Algorithmic fairness deals with the problem of ensuring that the learned ML model does not discriminate against subgroups in the population, using pre-, in-, and post-processing methods [26,74].…”
Section: Introduction
“…Numerous open-source homomorphic encryption libraries have emerged in recent years to facilitate the development and use of homomorphic encryption for AI models, including SEAL, HElib, TFHE, PALISADE, cuHE, HEAAN, and HE-transformer [16]. Microsoft's Simple Encrypted Arithmetic Library (SEAL) [17], with support for the Brakerski/Fan-Vercauteren (BFV) [18] and Cheon-Kim-Kim-Song (CKKS) [19] schemes, and IBM's HElib [20], based on the Brakerski-Gentry-Vaikuntanathan (BGV) [21] scheme, are two of the most widely used libraries for building privacy-preserving neural network inference applications.…”
Section: Privacy Preservation
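The core building block of the privacy-preserving inference those libraries enable is evaluating a linear layer over encrypted inputs. The sketch below conveys the idea with a toy additively homomorphic (Paillier-style) scheme rather than the lattice-based BFV/CKKS/BGV schemes the quoted libraries actually implement; all parameters are hypothetical and insecure, chosen only so the arithmetic is easy to follow:

```python
# Sketch: one linear-layer neuron evaluated on encrypted inputs.
# Server holds plaintext weights; client sends encrypted features.
# Toy Paillier-style parameters — INSECURE, illustration only.
import math
import random

p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

weights = [2, 3, 5]                 # model weights (plaintext, server side)
inputs = [4, 1, 7]                  # client features, sent encrypted
enc_inputs = [enc(x) for x in inputs]

# Server: E(x)^w = E(w*x); multiplying those ciphertexts sums the scaled
# terms, so the dot product is computed without ever seeing the inputs.
acc = 1
for w, cx in zip(weights, enc_inputs):
    acc = (acc * pow(cx, w, n2)) % n2

print(dec(acc))                     # 46 == 2*4 + 3*1 + 5*7
```

An additive scheme covers only the linear part of a network; non-linear activations are the reason practical systems turn to the leveled or fully homomorphic schemes (and polynomial activation approximations) surveyed in the cited paper.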