2021
DOI: 10.48550/arxiv.2107.06304
Preprint

Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion

Abstract: Understanding the behavior and vulnerability of pre-trained deep neural networks (DNNs) can help to improve them. Analysis can be performed via reversing the network's flow to generate inputs from internal representations. Most existing work relies on priors or data-intensive optimization to invert a model, yet struggles to scale to deep architectures and complex datasets. This paper presents a zero-shot direct model inversion framework that recovers the input to the trained model given only the internal repre…
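
The abstract describes recovering a model's input by reversing the network's flow from an internal representation. As a rough illustration of that idea only (a toy sketch, not the paper's zero-shot inversion algorithm), the snippet below inverts a small NumPy MLP whose layers preserve dimension, so each linear map and activation can be undone directly; all layer sizes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer MLP: y = W2 @ leaky_relu(W1 @ x). No layer reduces dimension,
# so little information is lost and the forward flow can be reversed.
W1 = rng.normal(size=(64, 32))   # R^32 -> R^64
W2 = rng.normal(size=(64, 64))   # R^64 -> R^64

def leaky_relu(z, alpha=0.1):
    return np.where(z > 0, z, alpha * z)

def leaky_relu_inv(z, alpha=0.1):
    # Leaky ReLU is bijective, so its inverse is exact.
    return np.where(z > 0, z, z / alpha)

def forward(x):
    return W2 @ leaky_relu(W1 @ x)

def invert(y):
    # Reverse the flow layer by layer: solve/pseudo-invert the linear maps,
    # apply the exact inverse of the activation.
    h = np.linalg.solve(W2, y)
    h = leaky_relu_inv(h)
    return np.linalg.pinv(W1) @ h    # left inverse: W1 has full column rank

x = rng.normal(size=32)
x_hat = invert(forward(x))
print(np.allclose(x, x_hat))         # True: reconstruction is essentially exact
```

In real deep architectures, layers do discard information (pooling, non-invertible activations, dimension reduction), which is what makes the inversion problem studied in the paper non-trivial.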

Cited by 2 publications (2 citation statements)
References 39 publications (69 reference statements)

“…BiGAN (Donahue et al., 2017) constructs a generative network and a reverse network that maps images back to noise, which can be used to obtain a latent representation of a dataset directly. Dong et al. (2021) show that it is possible to reverse neural networks for image reconstruction.…”
Section: Related Work
confidence: 99%
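
For context on the BiGAN setup mentioned in this quote, here is a minimal, assumed sketch (the architecture and sizes are illustrative, not Donahue et al.'s exact configuration) of a generator G: z -> x paired with an encoder E: x -> z, where E plays the role of the "reverse network" that yields latent representations.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 16, 28 * 28   # assumed sizes for illustration

generator = nn.Sequential(            # G: z -> x
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, image_dim), nn.Tanh(),
)
encoder = nn.Sequential(              # E: x -> z (the "reverse network")
    nn.Linear(image_dim, 128), nn.ReLU(),
    nn.Linear(128, latent_dim),
)

# In BiGAN, a discriminator scores joint pairs (x, z); training pushes
# (G(z), z) and (x, E(x)) to be indistinguishable, so E approximately inverts G.
z = torch.randn(4, latent_dim)
x_fake = generator(z)
z_rec = encoder(x_fake)               # latent representation of the generated images
print(x_fake.shape, z_rec.shape)
```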
“…Obtaining Z (and Y) requires all clients to upload their features (and corresponding labels). However, in the context of federated learning, sharing features and labels would incur prohibitively expensive communication overhead and expose the system to potential model inversion attacks [17, 93]. To achieve efficient and privacy-enhanced calibration, we propose to compute W* in a distributed manner.…”
Section: Fast Federated Calibration (FFC)
confidence: 99%
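
This quote motivates computing the calibration weights W* without uploading raw features Z or labels Y. One standard way to do that, shown below purely as an assumed sketch (it is not necessarily the FFC procedure of the cited paper), is for each client to share only the sufficient statistics Z_k^T Z_k and Z_k^T Y_k, which the server aggregates to solve a ridge-regularized least-squares problem.

```python
import numpy as np

def client_stats(Z_k, Y_k):
    # Local features Z_k (n_k x d) and labels Y_k (n_k x c) never leave the client;
    # only the d x d and d x c summary matrices are sent to the server.
    return Z_k.T @ Z_k, Z_k.T @ Y_k

def server_solve(stats, reg=1e-3):
    d = stats[0][0].shape[0]
    A = sum(s[0] for s in stats) + reg * np.eye(d)   # aggregated Z^T Z + ridge term
    B = sum(s[1] for s in stats)                     # aggregated Z^T Y
    return np.linalg.solve(A, B)                     # W* = (Z^T Z + reg I)^-1 Z^T Y

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 8)), rng.normal(size=(50, 3))) for _ in range(4)]
W_star = server_solve([client_stats(Z, Y) for Z, Y in clients])
print(W_star.shape)   # (8, 3)
```

Communication then scales with the feature dimension rather than the number of samples, and raw features are never centralized, which matches the efficiency and privacy goals stated in the quote.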