2019
DOI: 10.1016/j.cag.2019.05.030
Learning multi-view manifold for single image based modeling

Cited by 4 publications (3 citation statements)
References 27 publications
“…[43]. The Wasserstein-driven low-dimensional manifold model (W-LDMM) can be used for noise estimation, image denoising and noisy image inpainting tasks [44].…”
Section: Methods (mentioning)
confidence: 99%
“…The weights of the iterative manifold embedding (IME) layer are learned by unsupervised strategy, which has been used to analyze the intrinsic manifolds of data sets with missing data [42]. The distribution of image data in multi-view manifold space can be captured by Multi-view Generative Adversarial Network (GAN), which can map the shape and view manifolds in a lower dimensionality latent space [43]. The Wasserstein-driven low-dimensional manifold model (W-LDMM) can be used for noise estimation, image denoising and noisy image inpainting tasks [44].…”
Section: Introduction (mentioning)
confidence: 99%
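The statement above summarizes the indexed paper's central idea: a multi-view GAN that captures shape and view variation in a low-dimensional latent space, so that one object's views lie on a manifold traced out by a view code. Below is a minimal sketch of that kind of generator, assuming PyTorch; the class name MultiViewGenerator, the latent dimensions, and the DCGAN-style decoder are illustrative assumptions, not the cited paper's actual architecture.

```python
# Minimal sketch (not the authors' architecture): a generator that maps
# separate low-dimensional "shape" and "view" codes to an image, so that
# holding the shape code fixed and varying the view code traverses the
# view manifold of one object. All layer sizes and names are illustrative.
import torch
import torch.nn as nn

class MultiViewGenerator(nn.Module):
    def __init__(self, shape_dim=64, view_dim=8):
        super().__init__()
        # Fuse the two latent codes, then decode to a 64x64 image with
        # transposed convolutions (a common DCGAN-style decoder).
        self.fc = nn.Linear(shape_dim + view_dim, 256 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z_shape, z_view):
        z = torch.cat([z_shape, z_view], dim=1)   # joint latent code
        h = self.fc(z).view(-1, 256, 4, 4)
        return self.decoder(h)                    # (N, 3, 64, 64)

# Varying only the view code sweeps the view manifold of a single shape.
g = MultiViewGenerator()
z_shape = torch.randn(1, 64).repeat(5, 1)   # one object, repeated
z_view = torch.randn(5, 8)                  # five different views
views = g(z_shape, z_view)
print(views.shape)                          # torch.Size([5, 3, 64, 64])
```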
“…The generative models apply deep generative models that apply adversarial training to obtain information from multiple views [14]. These methods try to extract correlations between the views through GANs which has shown promising results on 3D skeletal data [15]. These methods are largely focused on 3D models, where a single image 3D model is applied to GANs to generate a multiple views for recognition using deep networks.…”
Section: Related Background (mentioning)
confidence: 99%
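The passage above describes generating multiple views from a single-image model with a GAN and feeding them to a recognition network. The sketch below illustrates only that fusion step, assuming PyTorch; multi_view_predict, the stand-in generator and classifier, and all tensor shapes are hypothetical and not an implementation from the cited works.

```python
# Hedged sketch of the recognition pipeline the statement alludes to:
# synthesize several views of one object from a single shape code,
# run a classifier on each view, and average the logits. The generator
# and classifier are passed in as plain callables; the toy stand-ins
# below produce random outputs purely to show the shapes involved.
import torch

def multi_view_predict(generator, classifier, z_shape, view_codes):
    """Average class logits over views generated from one shape code."""
    z_shape = z_shape.expand(view_codes.size(0), -1)  # reuse the same shape code
    views = generator(z_shape, view_codes)            # (V, 3, H, W) synthetic views
    logits = classifier(views)                        # (V, num_classes)
    return logits.mean(dim=0)                         # fused prediction

# Toy usage with stand-in models.
generator = lambda zs, zv: torch.randn(zv.size(0), 3, 64, 64)
classifier = lambda x: torch.randn(x.size(0), 10)
fused = multi_view_predict(generator, classifier,
                           torch.randn(1, 64), torch.randn(5, 8))
print(fused.shape)                                    # torch.Size([10])
```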