2020
DOI: 10.48550/arxiv.2010.09125
Preprint

Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering

Abstract: Differentiable rendering has paved the way to training neural networks to perform "inverse graphics" tasks such as predicting 3D geometry from monocular photographs. To train high-performing models, most current approaches rely on multi-view imagery, which is not readily available in practice. Recent Generative Adversarial Networks (GANs) that synthesize images, in contrast, seem to acquire 3D knowledge implicitly during training: object viewpoints can be manipulated by simply manipulating the latent code…
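The mechanism the abstract describes, backpropagating an image-space reconstruction loss through a differentiable renderer into a network's 3D predictions, can be illustrated with a small self-contained sketch. This is a hedged toy example, not the paper's implementation: `toy_render` (a Gaussian point splatter) and `InverseGraphicsNet` are illustrative stand-ins for a real differentiable rasterizer and a real monocular 3D predictor.

```python
import torch
import torch.nn as nn

def toy_render(points_3d: torch.Tensor, image_size: int = 32) -> torch.Tensor:
    """Splat 3D points as isotropic Gaussians onto a 2D image.

    A minimal stand-in for a real differentiable renderer; gradients
    flow from pixel values back to the 3D point positions.
    """
    # Orthographic projection: drop z, compare x,y against a pixel grid in [-1, 1].
    ys = torch.linspace(-1, 1, image_size, device=points_3d.device)
    xs = torch.linspace(-1, 1, image_size, device=points_3d.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1)                      # (H, W, 2)
    pts2d = points_3d[:, :2]                                  # (N, 2)
    d2 = ((grid[None] - pts2d[:, None, None]) ** 2).sum(-1)   # (N, H, W)
    return torch.exp(-d2 / 0.01).sum(0).clamp(max=1.0)        # soft silhouette

class InverseGraphicsNet(nn.Module):
    """Toy network predicting a small 3D point set from a single image."""
    def __init__(self, image_size: int = 32, n_points: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(image_size * image_size, 256), nn.ReLU(),
            nn.Linear(256, n_points * 3),
        )
        self.n_points = n_points

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img).view(self.n_points, 3).tanh()

net = InverseGraphicsNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
target = toy_render(torch.randn(64, 3).tanh())  # stand-in "photograph"

for step in range(100):
    pred_points = net(target.unsqueeze(0))
    rendered = toy_render(pred_points)
    loss = ((rendered - target) ** 2).mean()    # loss measured in image space
    opt.zero_grad()
    loss.backward()                             # gradients pass through the renderer
    opt.step()
```

The key design point is that `toy_render` is an ordinary differentiable function, so the image-space loss supervises the 3D point positions without any 3D ground truth.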

Cited by 15 publications (22 citation statements)
References 39 publications
“…However, given the large degrees of freedom of such representation and the noisy signals from adversarial training, the output shapes of their work suffer from strong distortion. Concurrent with our work, Zhang et al. [33] and Pan et al. [21] have utilized StyleGAN to generate multi-view synthetic data for 3D reconstruction tasks. Zhang et al. [33] conduct manual annotation on offline-generated data while Pan et al. [21] propose to iteratively synthesize data and train the reconstruction network.…”
Section: Unsupervised 3D Reconstruction and Generation From 2D Images
confidence: 74%
“…Concurrent with our work, Zhang et al. [33] and Pan et al. [21] have utilized StyleGAN to generate multi-view synthetic data for 3D reconstruction tasks. Zhang et al. [33] conduct manual annotation on offline-generated data while Pan et al. [21] propose to iteratively synthesize data and train the reconstruction network. Different from their work, our work builds a 3D generative model by simultaneously learning to manipulate StyleGAN2 generation and estimate 3D shapes.…”
Section: Unsupervised 3D Reconstruction and Generation From 2D Images
confidence: 74%
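To make the iteratively-synthesize-then-train scheme attributed to Pan et al. [21] in the statement above concrete, here is a minimal hedged sketch. The toy generator, the precomputed "viewpoint" latent direction, and the pose-regression loss are all illustrative assumptions, not any paper's actual method:

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained image GAN generator (e.g. StyleGAN)."""
    def __init__(self, latent_dim: int = 8, image_size: int = 16):
        super().__init__()
        self.net = nn.Linear(latent_dim, image_size * image_size)
        self.image_size = image_size

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, self.image_size, self.image_size)

def edit_viewpoint(z: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
    """Move latent codes along a (hypothetical) viewpoint direction."""
    return z + alpha * direction

latent_dim, image_size = 8, 16
gan = ToyGenerator(latent_dim, image_size).eval()   # pretrained and frozen
recon_net = nn.Sequential(nn.Flatten(), nn.Linear(image_size ** 2, 3))  # toy pose regressor
opt = torch.optim.Adam(recon_net.parameters(), lr=1e-3)
view_dir = torch.randn(latent_dim)  # assumed pre-discovered viewpoint direction

for round_ in range(3):                       # outer loop: re-synthesize, then train
    with torch.no_grad():                     # 1) synthesize pseudo multi-view data
        z = torch.randn(4, latent_dim)
        alphas = torch.tensor([-1.0, 0.0, 1.0])
        images = torch.cat([gan(edit_viewpoint(z, view_dir, a)) for a in alphas])
        labels = alphas.repeat_interleave(4)  # viewpoint label per synthesized image
    for img, a in zip(images, labels):        # 2) train the reconstruction network
        pred = recon_net(img.unsqueeze(0))
        loss = (pred[:, 0] - a).pow(2).mean() # toy objective: recover the viewpoint
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The alternation mirrors the quoted description: each round feeds freshly synthesized multi-view images into a new training pass over the reconstruction network.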
“…Besides, generative adversarial networks have proven successful at generating photo-realistic images in many domains [63]. Recent work on neural rendering and inverse graphics [142] suggests that those generative models learn representations in which geometry, light and texture can be disentangled. Such approaches could be used to generate new synthetic data samples that allow for larger intrinsic decomposition datasets.…”
Section: Enhancing Generalization
confidence: 99%
“…GAN's latent space or works directly with GAN-generated images. Careful modifications of the latent embeddings then translate to desired changes in generated output, allowing one, for example, to coherently change facial expressions in portraits [9][10][11][12][13][14][15][16], change viewpoint or shapes and textures of cars [17], or to interpolate between different images in a semantically meaningful manner [18][19][20][21].…”
confidence: 99%
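The latent-code edits this statement describes are commonly realized as linear operations in latent space: shifting a code along a discovered attribute direction, or interpolating between two codes. A minimal sketch, assuming a StyleGAN-style 512-dimensional latent code and a precomputed pose direction (both hypothetical placeholders here):

```python
import torch

def linear_edit(w: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
    """Shift a latent code along an attribute direction: w' = w + alpha * d / ||d||."""
    return w + alpha * direction / direction.norm()

def lerp(w0: torch.Tensor, w1: torch.Tensor, t: float) -> torch.Tensor:
    """Linearly interpolate two latent codes; intermediate codes typically
    decode to semantically smooth in-between images."""
    return (1 - t) * w0 + t * w1

w = torch.randn(512)         # e.g. a StyleGAN W-space code (512-d is conventional)
pose_dir = torch.randn(512)  # assumed pre-discovered "viewpoint" direction
edited = linear_edit(w, pose_dir, alpha=2.0)  # e.g. rotate the depicted object
midpoint = lerp(w, torch.randn(512), t=0.5)   # blend two images semantically
```

Directions like `pose_dir` are typically discovered post hoc, for instance by fitting a linear model that separates latent codes by the attribute of interest; the sketch simply assumes one is available.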