2022
DOI: 10.1109/access.2022.3210134
CHE: Channel-Wise Homomorphic Encryption for Ciphertext Inference in Convolutional Neural Network

Abstract: Privacy-preserving deep learning (PPDL), which leverages Homomorphic Encryption (HE), has attracted attention as a promising approach to ensuring the privacy of data in deep learning applications. While recent studies have developed and evaluated HE-based PPDL algorithms, the achieved performance, such as accuracy and latency, needs improvement to make the applications practical. This work aims to improve the image-classification performance of HE-based PPDL by combining two approaches: Channel-wise…

Cited by 7 publications (10 citation statements) · References 21 publications
“…Their innovative approach combined Channel-wise Homomorphic Encryption (CHE) with Batch Normalization (BN) and coefficient merging. Their comprehensive evaluations using the Cheon-Kim-Kim-Song HE scheme on datasets like MNIST and CIFAR-10 revealed significant improvements in both accuracy and latency [5].…”
Section: Homomorphic Encryption
confidence: 99%
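The CHE paper's exact merging procedure is not reproduced in this snippet; the following is a minimal sketch of the general idea of folding Batch Normalization coefficients into a preceding convolution's weights and bias, so that encrypted inference evaluates a single linear layer instead of two. The function name and tensor shapes are illustrative assumptions, not the paper's API.

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    # Illustrative sketch: merge BatchNorm parameters into the preceding
    # convolution's weights and bias. W has shape (Cout, Cin, kh, kw);
    # b, gamma, beta, mean, var are per-output-channel vectors.
    scale = gamma / np.sqrt(var + eps)          # per-channel BN scale
    W_folded = W * scale[:, None, None, None]   # scale each output channel
    b_folded = (b - mean) * scale + beta        # fold shift into the bias
    return W_folded, b_folded
```

Since conv followed by BN is an affine map of an affine map, the folded layer computes exactly the same function, and under HE this halves the number of ciphertext-level linear operations for that pair of layers.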
“…If the gradient value of a split chunk exceeds this threshold, it is considered that the split chunk contains feature information. This criterion helps us distinguish between split chunks with important features and those without, allowing us to focus our computational resources on fragments that are more likely to contribute to the overall image classification [5].…”
Section: Image Splitting
confidence: 99%
“…If a segment's gradient value surpasses this threshold, it is considered to contain ample feature information; otherwise, it is deemed to lack critical feature information. This criterion aids us in distinguishing segments with significant features from those without, allowing us to concentrate computational resources on segments that are more likely to contribute to the overall image classification [38]. The entire chunking and gradient computation process above is shown in Fig.…”
Section: Image Splitting
confidence: 99%
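The snippets above describe thresholding chunk gradients without giving the exact criterion. A minimal sketch of one plausible reading, assuming a mean-gradient-magnitude test per chunk (the function name, chunking scheme, and criterion are illustrative assumptions):

```python
import numpy as np

def select_informative_chunks(image, chunk, threshold):
    # Illustrative sketch: split a grayscale image into chunk x chunk
    # tiles and keep only those whose mean gradient magnitude exceeds
    # the threshold, i.e. tiles assumed to carry feature information.
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gy, gx)                # per-pixel gradient magnitude
    h, w = image.shape
    kept = []
    for i in range(0, h, chunk):
        for j in range(0, w, chunk):
            if grad[i:i + chunk, j:j + chunk].mean() > threshold:
                kept.append((i, j))        # top-left corner of kept tile
    return kept
```

On a mostly flat image, only the tiles containing edges or texture pass the test, which matches the cited motivation of spending ciphertext computation only on chunks likely to contribute to classification.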
“…1) with our contributions as follows. [Flattened comparison table of prior works — CryptoGCN [20]; CryptoNet [5], CryptoDL [7], LoLa [13], CHE [26]; F1 [24], CraterLake [25], BTS [10]; HEAX [22], Delphi [17], Gazelle [8], Cheetah [21]; SHE [14] — scored across three criteria; column structure not recoverable.] 2 Technical Background…”
Section: Our Contributions
confidence: 99%
“…Moreover, the continuous streaming data in cloud applications also invalidates Multi-Party Computation [23] and Garbled Circuits (GC) [9], because both rely on prohibitively high communication overhead. To eliminate the communication overhead while keeping data sharing secure, non-polynomial operators can be approximated by low-degree polynomials for inference [2,5,7,13,18,26]. The current FHE schemes, which most prior works focus on, do not support non-polynomial operators such as ReLU or MaxPooling.…”
Section: Comparison of VGG-19 on ImageNet-1K
confidence: 99%
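The polynomial-approximation idea mentioned above can be sketched minimally as follows, assuming a least-squares quadratic fit of ReLU over [-1, 1]; the interval, degree, and fitting method are illustrative choices, not those of any specific cited scheme.

```python
import numpy as np

# HE schemes such as CKKS support only additions and multiplications,
# so non-polynomial activations like ReLU must be replaced by
# polynomials. Illustrative: degree-2 least-squares fit over [-1, 1].
xs = np.linspace(-1.0, 1.0, 200)
coeffs = np.polyfit(xs, np.maximum(xs, 0.0), deg=2)

def poly_relu(x):
    # Evaluate the fitted polynomial: only mults/adds, so HE-friendly.
    return np.polyval(coeffs, x)

max_err = np.max(np.abs(poly_relu(xs) - np.maximum(xs, 0.0)))
```

In practice the approximation interval must cover the activations' actual range (often enforced with BN, as in CHE), since polynomial fits diverge quickly outside the fitted interval.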