2018
DOI: 10.48550/arxiv.1807.07796
Preprint

3D-LMNet: Latent Embedding Matching for Accurate and Diverse 3D Point Cloud Reconstruction from a Single Image

Cited by 21 publications (44 citation statements)
References 0 publications
“…Table 1 shows the quantitative comparison between 3D-R2N2 [4], PSGN [6], 3D-LMNet [18] and our proposed method. 3D-R2N2 takes as an input one or more images of an object taken from different viewpoints.…”
Section: Methods
confidence: 99%
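The snippet above compares point-cloud reconstruction methods quantitatively. The metric is not stated in the snippet, but a common choice for such comparisons is the Chamfer distance between predicted and ground-truth point sets; a minimal illustrative sketch (function name and O(N·M) formulation are my own, not from the cited works):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    For each point, take the squared distance to its nearest neighbor in
    the other set, then average over both directions. This brute-force
    O(N*M) version is for illustration; real evaluations use a k-d tree.
    """
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical sets score 0; a unit shift contributes 1 in each direction.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(pts, pts))                    # → 0.0
print(chamfer_distance(pts, pts + [0.0, 1.0, 0.0]))  # → 2.0
```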
“…To relieve such a problem, recently, some works [42,45,62,7] aim to estimate the truncated signed distance fields to preserve more details. Both voxel-based and point-based methods [13,41,20] require implicit surface reconstruction methods to generate the final triangle mesh. The above methods require ground-truth 3D models, which are difficult to obtain for real scenes.…”
Section: Related Work
confidence: 99%
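The snippet above mentions estimating truncated signed distance fields (TSDFs) to preserve detail. As an illustration only (the sphere example and parameter names are mine, not from the cited works): a TSDF stores the signed distance to the surface, negative inside and positive outside, clamped to a narrow band ±τ around the surface:

```python
import numpy as np

def tsdf_sphere(points, center, radius, trunc):
    """Truncated signed distance of query points to a sphere's surface.

    Negative inside, positive outside, clamped to [-trunc, trunc].
    Illustrative analytic case; real pipelines fuse depth maps into
    a voxel grid of such values.
    """
    d = np.linalg.norm(points - center, axis=-1) - radius  # signed distance
    return np.clip(d, -trunc, trunc)

# Query three points against a unit sphere at the origin, band = 0.1:
pts = np.array([[0.0, 0.0, 0.0],   # center: distance -1, clamped to -0.1
                [1.0, 0.0, 0.0],   # on surface: exactly 0
                [3.0, 0.0, 0.0]])  # far outside: +2, clamped to +0.1
print(tsdf_sphere(pts, np.zeros(3), 1.0, 0.1))
```

The zero level set of the grid is then turned into a triangle mesh, typically via marching cubes.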
“…There has been a lot of research on single image reconstruction task. Recent works involve 3D representation learning, including points [6,13,18], voxels [5,30], meshes [8,26,27,7] and primitives [21,25,29]. The representation can also be learned without the underlying ground truth 3D shapes [11,16,14,33,9,13].…”
Section: Related Work
confidence: 99%