2021
DOI: 10.48550/arxiv.2106.02484
Preprint
NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training

Abstract: Balancing the needs of data privacy and predictive utility is a central challenge for machine learning in healthcare. In particular, privacy concerns have led to a dearth of public datasets, complicated the construction of multi-hospital cohorts, and limited the utilization of external machine learning resources. To remedy this, new methods are required to enable data owners, such as hospitals, to share their datasets publicly while preserving both patient privacy and modeling utility. We propose NeuraCrypt …
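The core idea, as the abstract describes it, is that a data owner passes its images through a private, randomly weighted neural network before releasing the encoded outputs for public training. Below is a minimal sketch of such an encoder; the layer widths, depth, patch size, and the per-image patch shuffling are illustrative assumptions, not the paper's released configuration.

```python
# Minimal sketch of a NeuraCrypt-style private encoder (illustrative only;
# sizes and depth are assumptions). Each data owner samples a random network
# once, keeps its weights secret, and publishes only the encoded patches.
import torch
import torch.nn as nn

class NeuraCryptStyleEncoder(nn.Module):
    def __init__(self, patch_size=16, in_channels=1, hidden_dim=256,
                 depth=4, num_patches=196):
        super().__init__()
        # Patchify with a strided conv, then mix each patch with 1x1 convs.
        layers = [nn.Conv2d(in_channels, hidden_dim,
                            kernel_size=patch_size, stride=patch_size)]
        for _ in range(depth - 1):
            layers += [nn.ReLU(), nn.Conv2d(hidden_dim, hidden_dim, kernel_size=1)]
        self.patch_encoder = nn.Sequential(*layers)
        # Random positional embeddings let a downstream transformer still use
        # spatial information after the patch order is shuffled.
        self.pos_embed = nn.Parameter(torch.randn(1, num_patches, hidden_dim),
                                      requires_grad=False)
        # The random weights act as the owner's secret key: never trained or shared.
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        z = self.patch_encoder(x)            # (B, D, H', W')
        z = z.flatten(2).transpose(1, 2)     # (B, N, D) patch tokens
        z = z + self.pos_embed[:, : z.size(1)]
        # Shuffle patch order independently per image to hide spatial layout.
        perm = torch.argsort(torch.rand(z.size(0), z.size(1), device=z.device), dim=1)
        return torch.gather(z, 1, perm.unsqueeze(-1).expand_as(z))

# Usage: encode a batch of 224x224 grayscale images before public release.
encoder = NeuraCryptStyleEncoder()
encoded = encoder(torch.randn(8, 1, 224, 224))  # -> (8, 196, 256)
```

Because the random weights are never trained or shared, they serve as the data owner's key; a public model is then trained directly on the encoded patch tokens.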

Cited by 2 publications (4 citation statements). References: 27 publications.
“…Generally, privacy-preserving machine learning considers privacy across the whole machine learning pipeline, i.e., (1) privacy of datasets, (2) privacy of models, and (3) privacy of models' outputs [6]. To address privacy, there are various methods, such as cryptographic methods [18-20], federated learning [7,21,22], differential privacy [23-25], and image encoding methods [13,14,26-28]. As we focus on the privacy of datasets for image classification tasks, we review learnable image encryption, image encoding methods, and isotropic networks that can be used to classify visually protected images in the following subsections.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, it has been shown that visual information can be reconstructed from the encoded images by the attack method in [30]. Recently, random neural network methods, such as NeuraCrypt [28], have been proposed with the Vision Transformer (ViT) [31] to encode images, but the security of this method is questionable, since encoded images and plain images can be matched correctly by the algorithm in [32].…”
Section: Image Encoding Approaches (mentioning)
confidence: 99%
“…However, it was demonstrated that InstaHide-encoded images can be reconstructed [12]. Similarly, NeuraCrypt encodes images by using a random neural network with positional encoding [11]. However, an attack on NeuraCrypt was also released that completely matches encoded images to plain ones (NeuraCrypt Challenge 1) [13].…”
Section: Related Work A. Privacy-Preserving Image Classification (mentioning)
confidence: 99%
“…Therefore, the number of parameters increases as the number of blocks in an image increases. In the same line of research, other lightweight encoding schemes, such as [9]-[11], have also been put forward. Lightweight encoding schemes have security concerns, as described in [12], [13].…”
Section: Introduction (mentioning)
confidence: 99%
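The excerpt above notes that block-wise encoding schemes pay for more blocks with more parameters. The toy calculation below illustrates that scaling for a hypothetical scheme that assigns each block its own dense transform; the actual schemes in [9]-[11] differ in their per-block operations, so this is only an order-of-magnitude sketch.

```python
# Toy illustration (hypothetical scheme): if a block-wise encoder assigns each
# block its own dense transform over the flattened block, the parameter count
# grows linearly with the number of blocks in the image.
def blockwise_param_count(height, width, channels=3, block_size=16):
    num_blocks = (height // block_size) * (width // block_size)
    per_block = (channels * block_size ** 2) ** 2  # one dense matrix per block
    return num_blocks, num_blocks * per_block

# Fixed block size: larger images contain more blocks, hence more parameters.
for side in (224, 448, 896):
    n_blocks, n_params = blockwise_param_count(side, side)
    print(f"{side}x{side}: {n_blocks:>5} blocks -> {n_params:,} parameters")
```

With the block size held fixed, doubling each image dimension quadruples the block count and, under this per-block parameterization, quadruples the parameter total as well.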