2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00061
Front2Back: Single View 3D Shape Reconstruction via Front to Back Prediction


Cited by 39 publications (21 citation statements)
References 31 publications
“…In order to include a symmetric prior inside a network, we have to know the symmetry correspondences of the 3D output. To find the symmetry correspondences in view-centric coordinates, symmetry plane detection is essential [41]. In view-centric coordinates, the global reflection symmetry plane of an object varies with the input view.…”
Section: Proposed Methods
confidence: 99%
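The statement above concerns reflection symmetry expressed in view-centric (camera) coordinates, where the symmetry plane moves with the input view. As a minimal sketch of the underlying operation, assuming the plane parameters come from a separate detection step (the function name and example values below are illustrative, not taken from the cited papers):

import numpy as np

def reflect_across_plane(points, n, d):
    """Reflect Nx3 points across the plane n.x + d = 0; n is normalized internally."""
    n = n / np.linalg.norm(n)
    signed_dist = points @ n + d              # signed distance of each point to the plane
    return points - 2.0 * signed_dist[:, None] * n

# Example: a symmetry plane roughly perpendicular to the camera's x-axis,
# as a view-centric detector might return it for a particular input view.
pts = np.array([[0.2, 0.1, 1.5], [-0.3, 0.0, 1.7]])
mirrored = reflect_across_plane(pts, n=np.array([0.9, 0.0, 0.1]), d=0.05)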
“…To find the symmetry correspondence, these methods need either the camera pose or the symmetry plane as a given input. References [40,41] propose symmetry detection methods for 3D reconstruction. On the other hand, reference [39] proposes an implicit function for 3D reconstruction and recovers local details by projecting 3D points onto the 2D image and applying symmetry fusion.…”
Section: Introduction
confidence: 99%
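The symmetry-fusion idea mentioned above can be pictured as sampling image features at the projections of a 3D query point and of its mirror across the symmetry plane, then combining the two samples. The projection model, nearest-pixel sampling, and averaging rule below are assumptions for illustration, not code from reference [39]:

import numpy as np

def project(points, K):
    """Pinhole projection of Nx3 camera-frame points with a 3x3 intrinsics matrix K."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def symmetry_fused_features(points, mirrored_points, feat_map, K):
    """Sample an HxWxC feature map at both projections (nearest pixel) and average."""
    h, w, _ = feat_map.shape
    samples = []
    for pts in (points, mirrored_points):
        uv = np.round(project(pts, K)).astype(int)
        uv[:, 0] = np.clip(uv[:, 0], 0, w - 1)    # column index
        uv[:, 1] = np.clip(uv[:, 1], 0, h - 1)    # row index
        samples.append(feat_map[uv[:, 1], uv[:, 0]])
    return 0.5 * (samples[0] + samples[1])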
“…Very few works in the literature look into camera-frame object reconstruction. Yao et al. [24] presented a method that estimates the object symmetry plane and predicts front and back orthographic views. Their method, trained separately for every new object class, uses a GAN component that requires a large number of images for training.…”
Section: Related Work
confidence: 99%
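The front/back orthographic views described above can be lifted to a view-centric point cloud by placing, for every object pixel, one point at the front depth and one at the back depth. The helper below is a rough sketch under that orthographic-camera assumption; its name, the scale parameter, and the masking convention are not the authors' implementation:

import numpy as np

def depth_pair_to_points(front_depth, back_depth, scale=1.0, mask=None):
    """front_depth, back_depth: HxW depth maps; mask: HxW bool of valid object pixels."""
    h, w = front_depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    if mask is None:
        mask = np.isfinite(front_depth) & np.isfinite(back_depth)
    xy = np.stack([u[mask] * scale, v[mask] * scale], axis=1)
    front_pts = np.concatenate([xy, front_depth[mask][:, None]], axis=1)
    back_pts = np.concatenate([xy, back_depth[mask][:, None]], axis=1)
    return np.concatenate([front_pts, back_pts], axis=0)   # (2N, 3) point cloud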
“…It has the advantage of being more detail-preserving than directly regressing the 3-D shape. For example, the recent Front2Back [47] predicts a depth map from the input image and predicts the invisible sides using symmetry. However, their output is a point cloud, which requires additional non-trivial post-processing steps [2,12] to deliver the final 3-D mesh.…”
Section: Single Image Based Object Reconstruction
confidence: 99%
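One standard route for the post-processing step mentioned above is screened Poisson surface reconstruction, sketched here with the Open3D library; this is a generic example, not necessarily the method of [2,12], and the radius, neighborhood, and depth values are placeholders:

import open3d as o3d

def point_cloud_to_mesh(points_xyz, poisson_depth=9):
    """points_xyz: (N, 3) array of reconstructed surface points."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    # Poisson reconstruction needs oriented normals; estimate them from local neighborhoods.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(k=30)
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=poisson_depth)
    return mesh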