2021
DOI: 10.1088/1742-6596/2089/1/012012
Image Anonymization using Deep Convolutional Generative Adversarial Network

Abstract: Advancement in deep learning requires a significantly large amount of data for training, where protection of individual data plays a key role in data privacy and publication. Recent developments in deep learning pose a serious challenge to traditionally used approaches to image anonymization, such as the model inversion attack, in which an adversary repeatedly queries the model in order to reconstruct the original image from the anonymized image. In order to apply more protection to image anonymization, an approach…

Cited by 4 publications (2 citation statements)
References 3 publications
“…These case-based explanation methods do not perform compression. Another application of a GAN for image anonymization lacks explainability but uses compression to reduce computation (Rao et al, 2021). An approach for classification is to learn linear surrogate models for each class by training them with a differential privacy loss (Harder et al, 2020).…”
Section: Dataset Anonymization For Images
Mentioning confidence: 99%
“…Rao et al [8] presented a study in 2021 that proposed an algorithm to transform the input image matrix into a new output image by applying carefully chosen noise to the latent space representation (LSR) of the original image. The identity of the synthetic image was concealed by using well-designed noise computed on the gradient during the learning process, resulting in a realistic image that was resistant to inversion attacks.…”
Section: Related Work
Mentioning confidence: 99%
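The citing statement above describes the core idea: encode the input image into a latent-space representation, perturb that representation with noise, and decode a realistic but identity-concealing synthetic image. The paper's actual DCGAN components and gradient-based noise design are not given in this excerpt, so the sketch below is a hypothetical simplification: `encoder` and `generator` are stand-in callables, and plain Gaussian noise replaces the paper's carefully computed perturbation.

```python
import numpy as np

def anonymize_latent(z, sigma=0.5, seed=None):
    """Perturb a latent vector with Gaussian noise (a simplified
    stand-in for the paper's gradient-computed LSR perturbation)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=z.shape)
    return z + noise

def anonymize_image(image, encoder, generator, sigma=0.5, seed=None):
    """Hypothetical pipeline: image -> latent code -> noisy latent
    code -> synthetic image. `encoder` and `generator` stand in for
    the DCGAN components, which this excerpt does not specify."""
    z = encoder(image)                          # image -> latent code
    z_anon = anonymize_latent(z, sigma, seed)   # inject noise in latent space
    return generator(z_anon)                    # latent code -> synthetic image
```

The point of perturbing the latent code rather than the pixels is that the generator maps any nearby latent vector back onto the manifold of realistic images, so the output remains plausible while the mapping back to the original identity is obscured.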