Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation 2019
DOI: 10.1145/3314221.3314628
CHET: an optimizing compiler for fully-homomorphic neural-network inferencing

Cited by 164 publications (188 citation statements)
References 13 publications
“…Handwriting efficient HE kernels is a tedious and error-prone process as HE provides limited instructions, intra-ciphertext data movement must be done using vector rotation, and the noise budget adds additional sources of error. As a result, HE code today is typically written by experts [16,27,40]. Figure 2.…”
Section: HE Compilation Challenges
confidence: 99%
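The rotation constraint this excerpt mentions is easiest to see in the batched ("SIMD slot") view that schemes such as BFV and CKKS expose: a ciphertext holds a vector of slots, operations act slot-wise, and the only way to move a value between slots is a cyclic rotation. The plain-Python sketch below models that restriction (no real HE library is involved, and all names are illustrative) and shows the standard rotate-and-sum reduction that hand-written HE kernels use for dot products.

```python
# Plain-Python model of the batched "slot" semantics of BFV/CKKS ciphertexts.
# No HE library is used; the goal is only to show why intra-ciphertext data
# movement needs explicit rotations and how a dot product is reduced with
# log2(n) rotate-and-add steps.

def rotate(slots, k):
    """Cyclic left rotation of the slot vector by k positions."""
    n = len(slots)
    k %= n
    return slots[k:] + slots[:k]

def slotwise_mul(a, b):
    """Element-wise product: what a single HE multiply does across all slots."""
    return [x * y for x, y in zip(a, b)]

def slotwise_add(a, b):
    return [x + y for x, y in zip(a, b)]

def dot_product(a, b):
    """Rotate-and-sum: after the loop, slot 0 holds the full dot product."""
    n = len(a)
    assert n == len(b) and n & (n - 1) == 0, "power-of-two slot count assumed"
    acc = slotwise_mul(a, b)
    step = n // 2
    while step >= 1:
        acc = slotwise_add(acc, rotate(acc, step))
        step //= 2
    return acc[0]

if __name__ == "__main__":
    print(dot_product([1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]))  # 5.0
```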
“…A nascent body of prior work has investigated specific aspects of compiling HE code. For example, prior work has shown that HE parameter tuning, which determines the noise budget, can be automated and optimized to improve performance [3,12,15,16]. Others have proposed mechanisms to optimize data layouts for neural networks [16].…”
Section: Introduction
confidence: 99%
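As a rough illustration of the parameter-tuning problem this excerpt refers to, the sketch below picks a CKKS-style modulus chain and polynomial degree from a circuit's multiplicative depth. The bit budgets and the one-prime-per-level convention are common rules of thumb rather than anything prescribed by CHET or the cited works; the exact limits are enforced by whichever HE library is used.

```python
# Hedged sketch of depth-driven CKKS parameter selection: choose one
# scale-sized prime per multiplicative level plus two larger "special" primes,
# then pick the smallest polynomial degree whose coefficient-modulus budget
# (at ~128-bit security) can hold the chain. The budgets below are the
# commonly cited homomorphic-encryption-standard values and are illustrative;
# a real deployment should rely on the limits enforced by the HE library.

MAX_COEFF_BITS = {4096: 109, 8192: 218, 16384: 438, 32768: 881}

def choose_parameters(mult_depth, scale_bits=40, special_bits=60):
    """Return (poly_modulus_degree, prime bit-sizes) for the requested depth."""
    chain = [special_bits] + [scale_bits] * mult_depth + [special_bits]
    total = sum(chain)
    for degree in sorted(MAX_COEFF_BITS):
        if total <= MAX_COEFF_BITS[degree]:
            return degree, chain
    raise ValueError(f"depth {mult_depth} needs {total} coeff-modulus bits, "
                     "more than the largest degree in this sketch allows")

if __name__ == "__main__":
    # A circuit with 3 multiplicative levels (e.g. a small polynomial network)
    print(choose_parameters(mult_depth=3))  # (16384, [60, 40, 40, 40, 60])
```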
“…To predict the output value, we need to obtain β = [b (α • y)^T]^T. Without decryption, we can obtain β by using matrix A, and the inner product between the kernel values and β induces the SVM function (9).…”
Section: Prediction Phase
confidence: 99%
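For readers unfamiliar with the notation, the quoted construction packs the SVM bias b and the products α_i·y_i into a single vector β so that the decision function becomes one inner product with an augmented kernel vector, which is exactly the shape that maps onto a single encrypted dot product. The numpy sketch below shows only that plaintext structure; all names are illustrative and the homomorphic evaluation in the cited work is not reproduced here.

```python
# Plaintext numpy sketch of the structure in the quoted passage: the SVM
# decision function f(x) = sum_i alpha_i * y_i * K(x_i, x) + b written as one
# inner product between an augmented kernel vector [1, K(x_1,x), ..., K(x_m,x)]
# and beta = [b, (alpha * y)^T]^T. Names are illustrative; the homomorphic
# evaluation in the cited work is not reproduced here.
import numpy as np

def rbf_kernel(u, v, gamma=0.5):
    return np.exp(-gamma * np.sum((u - v) ** 2))

def svm_decision(x, support_vectors, alpha, y, b, kernel=rbf_kernel):
    beta = np.concatenate(([b], alpha * y))             # [b, alpha_1*y_1, ...]
    k_vec = np.concatenate(([1.0], [kernel(sv, x) for sv in support_vectors]))
    return float(k_vec @ beta)                          # single inner product

if __name__ == "__main__":
    sv = np.array([[0.0, 1.0], [1.0, 0.0]])
    alpha = np.array([0.7, 0.7])
    y = np.array([+1.0, -1.0])
    print(svm_decision(np.array([0.2, 0.9]), sv, alpha, y, b=0.1))
```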
“…Using FHE, multiple institutions can share their data in encrypted form and evaluate machine learning algorithms on it. Several works have addressed secure computation for machine learning algorithms with fully homomorphic encryption: prediction phases [2]-[4] and training phases [5], [6] of the Logistic Regression Model (LRM), decision tree inference phases [7], and deep neural network inference phases [8], [9]. Notably, few works have been reported that propose private training for support vector machine (SVM) algorithms using FHE, because the SVM training model has many constraints and non-polynomial functions.…”
Section: Introduction
confidence: 99%