2021 International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv53792.2021.00139
Multi-Category Mesh Reconstruction From Image Collections

Cited by 7 publications (3 citation statements) | References 41 publications
“…This neural model is trained on the same set of photorealistic RGB images of the considered categories and learns to infer the 3D mesh of unseen instances from a single viewpoint. In this work, a box-supervised 3D model reconstructor is developed on top of Multi-Category Mesh Reconstruction (MCMR) [26], as in Fig. 4.…”
Section: 3D Model Reconstruction
confidence: 99%
“…It is possible to learn multiple mean shapes and let a classifier select the most suitable one with respect to the image features. In the presented approach, the loss function in [26] is completed with the 3D bounding-box supervision by introducing the term…”
Section: 3D Model Reconstruction
confidence: 99%
“…Pixel2Mesh [26] treats a mesh as a graph, applying graph convolution [27] for vertex feature extraction and graph unpooling to subdivide the mesh for detailed refinement. Using differentiable mesh rendering [28], [29], the 3D mesh structure of an object can be learned from 2D images [30]–[32]. Mesh R-CNN [33] simultaneously detects objects and reconstructs their 3D mesh shape.…”
Section: Related Work
confidence: 99%
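The graph-convolution step mentioned in the last statement (vertex feature extraction over a mesh treated as a graph) can be sketched as follows. This is a minimal illustrative sketch, not the Pixel2Mesh implementation; the function name, feature shapes, and the self/neighbor weight split are assumptions.

```python
import numpy as np

def graph_conv(verts_feat, edges, w_self, w_neigh):
    """One graph-convolution step over mesh vertices.

    verts_feat: (V, F) per-vertex feature matrix
    edges:      (E, 2) undirected vertex index pairs
    w_self, w_neigh: (F, F_out) learnable weight matrices
    """
    num_verts = verts_feat.shape[0]
    # Aggregate neighbor features along both directions of each edge.
    agg = np.zeros_like(verts_feat)
    deg = np.zeros(num_verts)
    for i, j in edges:
        agg[i] += verts_feat[j]
        agg[j] += verts_feat[i]
        deg[i] += 1
        deg[j] += 1
    agg /= np.maximum(deg, 1)[:, None]  # mean over neighbors
    # Combine the vertex's own features with the neighbor average, then ReLU.
    return np.maximum(verts_feat @ w_self + agg @ w_neigh, 0.0)

# Tiny example: a single triangle (3 vertices, 3 edges).
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
edges = np.array([[0, 1], [1, 2], [2, 0]])
w0 = rng.normal(size=(4, 8))
w1 = rng.normal(size=(4, 8))
out = graph_conv(feats, edges, w0, w1)
print(out.shape)  # (3, 8)
```

Stacking several such layers propagates shape information across the mesh surface; graph unpooling then subdivides edges to add vertices before the next refinement stage.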