2020 25th International Conference on Pattern Recognition (ICPR), 2021
DOI: 10.1109/icpr48806.2021.9412045
Interpreting the Latent Space of GANs via Correlation Analysis for Controllable Concept Manipulation

Abstract: Generative adversarial nets (GANs) have been successfully applied in many fields, such as image generation, inpainting, super-resolution, and drug discovery; even so, the inner workings of GANs remain far from understood. To gain deeper insight into the intrinsic mechanism of GANs, this paper proposes a method for interpreting the latent space of GANs by analyzing the correlation between latent variables and the corresponding semantic content in generated images. Unlike previous methods that focus on diss…
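The correlation analysis the abstract describes can be illustrated with a short sketch. This is a minimal illustration under assumed helpers (a pretrained generator(z) that maps a latent code to an image, and a scalar attribute_scorer(image), e.g. a smile-intensity predictor; neither comes from the paper), not the authors' implementation:

    import numpy as np

    def correlate_latents(generator, attribute_scorer, latent_dim=512, n_samples=2000):
        # Sample latent codes and score one semantic attribute in each generated image.
        z = np.random.randn(n_samples, latent_dim)
        scores = np.array([attribute_scorer(generator(zi)) for zi in z])
        # Pearson correlation between every latent coordinate and the attribute score.
        z_std = (z - z.mean(axis=0)) / z.std(axis=0)
        s_std = (scores - scores.mean()) / scores.std()
        return z_std.T @ s_std / n_samples  # shape: (latent_dim,)

Latent coordinates with the largest absolute correlation are then candidates for controllable manipulation: nudging them should change the corresponding concept in the generated image.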

Cited by 5 publications (4 citation statements, published 2023–2024); references 20 publications.
“…Unlike the above methods, which only factorize between different attribute-related codes, our factorization disentangles a latent code into both attribute-relevant and attribute-irrelevant codes for each attribute. It is important to note that a recent endeavor to achieve end-to-end training of both a latent space factorization module and an image generator has been explored in Reference 17. However, it is noteworthy that the generator utilized in Reference 17 operates on an auto-encoder framework, resulting in comparatively constrained image quality when compared to a GAN.…”
Section: Related Work (mentioning)
confidence: 99%
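The subspace-projection style of factorization these statements refer to can be sketched as follows. The orthonormal basis U of an attribute subspace is assumed given (e.g. estimated or learned); this illustrates the general idea only, not the method of Reference 17:

    import numpy as np

    def factorize(z, U):
        # z: latent code, shape (latent_dim,)
        # U: orthonormal basis of an attribute subspace, shape (latent_dim, k)
        z_attr = U @ (U.T @ z)  # attribute-relevant component: projection onto the subspace
        z_rest = z - z_attr     # attribute-irrelevant component: orthogonal residual
        return z_attr, z_rest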
“…It is important to note that a recent endeavor to achieve end-to-end training of both a latent space factorization module and an image generator has been explored in Reference 17. However, it is noteworthy that the generator utilized in Reference 17 operates on an auto-encoder framework, resulting in comparatively constrained image quality when compared to a GAN. Additionally, the factorization process in Reference 17 again relies on subspace projection, which confines it to disentangling attributes only among themselves, akin to the approach described in References 15 and 16. Identity consistency maintenance: to maintain identity consistency, most recent models [1,3,7,9,18] construct a reconstruction loss or use a cycle-consistency loss to prevent the network from losing face information during the operation.…”
Section: Related Work (mentioning)
confidence: 99%
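The reconstruction and cycle-consistency losses mentioned for identity preservation can be sketched in PyTorch-style code. The editing network G(image, attributes) is a hypothetical stand-in, not any specific cited model:

    import torch.nn.functional as F

    def identity_losses(G, img, attr_src, attr_tgt):
        # Reconstruction: re-applying the source attributes should return the input.
        rec_loss = F.l1_loss(G(img, attr_src), img)
        # Cycle consistency: editing to the target attributes and back should too.
        edited = G(img, attr_tgt)
        cyc_loss = F.l1_loss(G(edited, attr_src), img)
        return rec_loss, cyc_loss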
“…Counterfactual explanation methods that incorporate disentanglement can be further partitioned into methods that rely on annotated side information about image properties, for example face images with annotated attributes such as hair color, mustache, or skin color (He et al 2019, Gabbay et al 2021, Li et al 2020), and unconditioned methods that do not use any data annotations beyond the binary classification labels used to train the classification model (Lang et al 2021, Higgins et al 2021, Rodríguez et al 2021). DISCOVER benefits from the advantages of both counterfactual explanation and attribution-based methods.…”
Section: DISCOVER Was Designed to Overcome Limitations of Alternative… (mentioning)
confidence: 99%
“…To assess its effectiveness in addressing OOD reconstruction, attribute manipulation, and conditional generation, we compare it against a conditional VAE as a baseline. Furthermore, we compare our model with CsVAE [54], PCVAE [55], and MSP [56]. CsVAE, like BtVAAE, is based on a conditional VAE model but uses two latent variables to separate the information correlated with the attributes y into a pre-defined subspace.…”
Section: Baselines (mentioning)
confidence: 99%
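The two-latent-variable separation described for CsVAE can be sketched as an encoder with two heads, one for the attribute-correlated code and one for everything else. Module names and dimensions are hypothetical; this illustrates only the subspace split, not CsVAE's full training objective:

    import torch.nn as nn

    class TwoLatentEncoder(nn.Module):
        def __init__(self, in_dim=3 * 64 * 64, feat_dim=256, z_dim=32, w_dim=8):
            super().__init__()
            self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())
            self.to_z = nn.Linear(feat_dim, z_dim)  # attribute-irrelevant code z
            self.to_w = nn.Linear(feat_dim, w_dim)  # attribute-correlated code w, tied to labels y

        def forward(self, x):
            h = self.backbone(x)
            return self.to_z(h), self.to_w(h)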