2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01864
LARGE: Latent-Based Regression through GAN Semantics

Cited by 11 publications (4 citation statements) · References 30 publications
“…To investigate how GANs capture high-order relations, we study the encoding of important semantics in the latent space of GANs. GANs have been extremely successful at encoding important semantics about data in an entirely unsupervised experimental setup [41]. It has been shown that several semantics emerge in the latent space of a GAN during training [42].…”
Section: Methods
confidence: 99%
“…It has been shown that several semantics emerge in the latent space of a GAN during training [42]. The semantic encoding exhibits smooth, linear directions that affect properties of the data (time-points and cell lineages) with respect to the latent space variables [41]. Researchers have emphasized the importance of establishing interpretable connections between a GAN’s latent space variables and meaningful data semantics.…”
Section: Methods
confidence: 99%
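The "smooth and linear directions" described in the statement above can be illustrated with a toy sketch: given latent codes for two groups that differ in one attribute, the normalized difference of their means gives a linear editing direction. All names, shapes, and data here are illustrative assumptions, not the cited papers' actual method or API:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512

# Toy latent codes for two groups that differ in one attribute
# (in practice these would come from a GAN's latent space).
w_group_a = rng.normal(0.0, 1.0, (100, latent_dim))
w_group_b = w_group_a + 0.8  # group B shifted along a shared direction

# A linear semantic direction: normalized difference of group means.
direction = (w_group_b - w_group_a).mean(axis=0)
direction /= np.linalg.norm(direction)

# Editing: move a latent code along the direction; alpha sets edit strength.
alpha = 2.0
w = rng.normal(0.0, 1.0, latent_dim)
w_edited = w + alpha * direction
```

Feeding `w_edited` back through the generator would, under the linearity assumption, change only the targeted attribute while leaving other semantics largely intact.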
“…These models transform a certain vector space, denoted the latent space, which can be learned as done in StyleGAN [101][102][103] or GLIDE [104], into the space of desired generated images. The latent space has desirable properties that allow performing manipulations on it, which can lead to solving inverse problems [105][106][107][108][109][110][111], to a good representation for classification [112][113][114][115][116][117], or to intuitive image editing [118][119][120][121][122][123][124]. Specifically, for the classification task, the latent space of these models has been used to perform regression of properties such as age or face pose from a small number of examples [113], to obtain consistent semantic segmentation of parts in generated objects that improves performance on real data [114][115][116], and to obtain data annotations efficiently, leading to more efficient training [117].…”
Section: Generative Models
confidence: 99%
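The few-shot regression of properties such as age from latent codes, in the spirit of [113], can be sketched as fitting a ridge-regularized linear model on a handful of (latent code, label) pairs. This is a minimal sketch with synthetic data; it assumes the attribute varies linearly along a hidden latent direction, and real latent codes would come from a GAN encoder:

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, n_train = 512, 16  # only a few labeled examples

# Synthetic setup: the attribute (e.g. age) varies linearly along one
# hidden direction of the latent space (an assumption for this sketch).
true_dir = rng.normal(size=latent_dim)
true_dir /= np.linalg.norm(true_dir)
W = rng.normal(size=(n_train, latent_dim))  # latent codes
age = W @ true_dir * 10.0 + 40.0            # attribute labels

# Ridge-regularized least squares with an intercept (the label mean):
# beta = (W^T W + lambda I)^-1 W^T (y - mean(y))
y_mean = age.mean()
lam = 1e-2
A = W.T @ W + lam * np.eye(latent_dim)
beta = np.linalg.solve(A, W.T @ (age - y_mean))

def predict(w_codes):
    """Predict the attribute for latent codes (rows of w_codes)."""
    return w_codes @ beta + y_mean
```

Ridge regularization is what makes the fit well-posed here: with far fewer examples than latent dimensions, unregularized least squares would be underdetermined.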