2020
DOI: 10.1007/978-3-030-58604-1_6
Novel View Synthesis on Unpaired Data by Conditional Deformable Variational Auto-Encoder

Cited by 14 publications (4 citation statements) · References 25 publications
“…Zhu et al introduced CycleGAN, a method capable of recovering the front face from a single profile postural facial image, even when the source domain does not match the target domain [59]. This approach is based on a conditional variational autoencoder and generative adversarial network (cVAE-GAN) framework, which does not require paired data, making it a versatile method for view translation [60]. Shen et al proposed Pairwise-GAN, employing two parallel U-Nets as generators and PatchGAN as a discriminator to synthesize frontal face images [61]. Similarly, Chan et al presented pi-GAN, a method utilizing periodic implicit Generative Adversarial Networks for high-quality 3D-aware image synthesis [62].…”
Section: Pose Manipulation (citation type: mentioning, confidence: 99%)
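A small sketch may help make the view-conditioned generation concrete. The PyTorch snippet below is a minimal, hypothetical conditional VAE whose decoder is conditioned on a one-hot view vector appended to the latent code; the layer sizes, latent dimension, and number of views are illustrative assumptions and do not reproduce any of the cited architectures.

import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, image_dim=64 * 64 * 3, latent_dim=128, num_views=9):
        super().__init__()
        # Encoder maps a flattened source-view image to a latent distribution.
        self.encoder = nn.Sequential(nn.Linear(image_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        # Decoder is conditioned on a one-hot view vector appended to the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + num_views, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Sigmoid(),
        )

    def forward(self, x, view_onehot):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(torch.cat([z, view_onehot], dim=1)), mu, logvar

# Usage: synthesize target views by feeding the desired view condition vector.
model = ConditionalVAE()
x = torch.rand(4, 64 * 64 * 3)                           # flattened source-view images
target_view = torch.eye(9)[torch.tensor([2, 2, 5, 7])]   # one-hot target view labels
x_hat, mu, logvar = model(x, target_view)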
“…Paired data of the source and target views are commonly required for novel view synthesis, thus raising the threshold of data acquisition. To address this challenge, Yin et al presented an unpaired view translation framework that used cVAE-GAN to decompose the features of source views and control the generation of target views through view condition vectors [29]. Furthermore, Palazzi et al proposed a self-supervised and semiparametric method (a fusion of a fully learning-based generative network and a non-learned a priori geometric-knowledge component) that can generate novel views of a vehicle from a single monocular image [30].…”
Section: Related Work (citation type: mentioning, confidence: 99%)
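As a rough sketch of how training can proceed without paired target images in a cVAE-GAN-style setup: the source view is reconstructed under its own view condition (self-supervision plus a KL term), while translations to other views are constrained only by an adversarial realism term. The function below reuses the ConditionalVAE sketch given earlier; the L1/KL/BCE loss choices, the weights, and the toy linear discriminator are assumptions for illustration, not the losses used by the cited papers.

import torch
import torch.nn.functional as F

def cvae_gan_step(model, discriminator, x_src, src_view, tgt_view,
                  lambda_kl=0.01, lambda_adv=0.1):
    # Reconstruct the source view under its own view condition (no paired target needed).
    x_rec, mu, logvar = model(x_src, src_view)
    rec_loss = F.l1_loss(x_rec, x_src)
    kl_loss = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # Translate to an unpaired target view; realism is enforced adversarially.
    x_tgt, _, _ = model(x_src, tgt_view)
    adv_loss = F.binary_cross_entropy_with_logits(
        discriminator(x_tgt), torch.ones(x_tgt.size(0), 1))

    return rec_loss + lambda_kl * kl_loss + lambda_adv * adv_loss

# Example with the ConditionalVAE sketch above and a toy discriminator.
disc = torch.nn.Sequential(torch.nn.Linear(64 * 64 * 3, 1))
src_view = torch.eye(9)[torch.zeros(4, dtype=torch.long)]
tgt_view = torch.eye(9)[torch.full((4,), 3, dtype=torch.long)]
generator_loss = cvae_gan_step(model, disc, x, src_view, tgt_view)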
“…2D transformation-based methods mainly focus on learning pixel displacement between the input source view(s) and the target view [8]-[10] or directly regressing the pixel colors of the target view in its 2D image plane [11]-[13]. 3D transformation-based methods [14]-[15] often predict a 3D representation, such as an occupancy volume, first and then explicitly perform 3D spatial transformation on the representation to synthesize the target view.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
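A common concrete mechanism behind the 2D transformation-based family is backward warping with a predicted per-pixel displacement (appearance-flow) field. The generic PyTorch snippet below illustrates only the warping step; the random flow merely stands in for the output of a displacement-prediction network and is not tied to any specific cited method.

import torch
import torch.nn.functional as F

def warp_with_flow(src, flow):
    # src: (B, C, H, W) source view; flow: (B, 2, H, W) displacement in pixels (x, y).
    b, _, h, w = src.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(b, -1, -1, -1)
    coords = grid + flow
    # Normalize coordinates to [-1, 1] as expected by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(src, sample_grid, align_corners=True)

src = torch.rand(1, 3, 64, 64)
flow = torch.randn(1, 2, 64, 64)   # would come from a displacement-prediction network
tgt = warp_with_flow(src, flow)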