2019
DOI: 10.1007/978-3-030-11015-4_50

3D-PSRNet: Part Segmented 3D Point Cloud Reconstruction from a Single Image

Abstract: We propose a mechanism to reconstruct part annotated 3D point clouds of objects given just a single input image. We demonstrate that jointly training for both reconstruction and segmentation leads to improved performance in both the tasks, when compared to training for each task individually. The key idea is to propagate information from each task so as to aid the other during the training procedure. Towards this end, we introduce a location-aware segmentation loss in the training regime. We empirically show t…
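The abstract's key ingredient is the location-aware segmentation loss that couples the reconstruction and segmentation objectives. Below is a minimal sketch of one plausible form of such a loss, assuming PyTorch: predicted points are matched to ground-truth points by nearest-neighbour search in both directions, and the matched ground-truth part labels supervise the predicted per-point logits. The function name and the exact bidirectional matching scheme are illustrative assumptions, not the paper's verbatim formulation.

```python
import torch
import torch.nn.functional as F

def location_aware_seg_loss(pred_xyz, pred_logits, gt_xyz, gt_labels):
    """Sketch of a location-aware segmentation loss (single example for brevity).

    pred_xyz:    (N, 3)  predicted point coordinates
    pred_logits: (N, C)  per-point class logits
    gt_xyz:      (M, 3)  ground-truth point coordinates
    gt_labels:   (M,)    ground-truth part labels in [0, C)
    """
    # Pairwise squared distances between predicted and ground-truth points: (N, M)
    dists = torch.cdist(pred_xyz, gt_xyz) ** 2

    # Forward matching: each predicted point is supervised by the label
    # of its nearest ground-truth point.
    nn_fwd = dists.argmin(dim=1)                      # (N,)
    loss_fwd = F.cross_entropy(pred_logits, gt_labels[nn_fwd])

    # Backward matching: every ground-truth label also supervises the
    # logits of its nearest predicted point.
    nn_bwd = dists.argmin(dim=0)                      # (M,)
    loss_bwd = F.cross_entropy(pred_logits[nn_bwd], gt_labels)

    return loss_fwd + loss_bwd
```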

Cited by 98 publications (99 citation statements)
References 20 publications

Citation statements (ordered by relevance):
“…• Point set representation treats a point cloud as a matrix of size N × 3 [21], [22], [68], [71], [73], [77]. • One or multiple 3-channel grids of size H × W × 3 [68], [69], [78].…”
Section: Representations (mentioning)
Confidence: 99%
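To make the two output representations contrasted in this excerpt concrete, here is a small illustration with arbitrary shapes: the same 3D coordinates can be stored either as an unordered N × 3 matrix or as an image-like H × W × 3 grid.

```python
import torch

N, H, W = 1024, 32, 32  # arbitrary sizes with H * W = N

# Point-set representation: an unordered set of N points as an N x 3 matrix.
point_set = torch.randn(N, 3)

# Grid representation: the same number of 3D coordinates arranged as an
# H x W x 3 grid, so 2D convolutions can operate over the H x W layout.
point_grid = torch.randn(H, W, 3)

# Flattening the grid recovers the set view of H * W = N coordinates.
assert point_grid.reshape(-1, 3).shape == point_set.shape
```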
“…Point set representations (Fig. 3-(c)) use fully connected layers [21], [68], [70], [73], [77] since point clouds are unordered. The main advantage of fully-connected layers is that they capture the global information.…”
Section: Network Architectures (mentioning)
Confidence: 99%
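The excerpt notes that point-set decoders typically regress all N × 3 coordinates with fully connected layers from a single global feature. The sketch below shows that pattern in PyTorch; the class name, layer widths, and feature dimension are illustrative assumptions rather than any specific paper's architecture.

```python
import torch
import torch.nn as nn

class FCPointDecoder(nn.Module):
    """Illustrative fully-connected decoder that regresses an unordered
    point set of shape (N, 3) from a global feature vector. The final
    linear layer outputs all 3N coordinates at once, which is why this
    style of decoder captures global structure."""

    def __init__(self, feat_dim=512, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, num_points * 3),
        )

    def forward(self, feat):                     # feat: (B, feat_dim)
        out = self.mlp(feat)                     # (B, N * 3)
        return out.view(-1, self.num_points, 3)  # (B, N, 3)


# Usage: decode a batch of 4 global feature vectors into point clouds.
decoder = FCPointDecoder()
points = decoder(torch.randn(4, 512))
print(points.shape)  # torch.Size([4, 1024, 3])
```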
“…There has been much 3D model generation work based on the point cloud. For example, Mandikal et al. [26] propose 3D-LMNet, in which a latent space is learnt by an auto-encoder network and the image is encoded into that space. By constraining the difference between the image feature and the point-cloud feature, a latent vector from which a 3D model can be generated is obtained.…”
Section: Related Work (mentioning)
Confidence: 99%
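The latent-matching idea described in this excerpt can be sketched as follows: a point-cloud auto-encoder defines the latent space, and an image encoder is trained to land on the same latent vector so the frozen decoder can produce a point cloud at test time. The modules below (point_encoder, point_decoder, image_encoder) are placeholder networks with made-up sizes, not 3D-LMNet's actual architecture; only the overall training signal is being illustrated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks: a point-cloud auto-encoder that defines the latent
# space, and an image encoder trained to map into that same space.
point_encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 512))
point_decoder = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 1024 * 3))
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))

def latent_matching_loss(image, gt_points):
    """Pull the image embedding towards the (frozen) point-cloud embedding."""
    with torch.no_grad():                                   # auto-encoder assumed pre-trained
        z_pc = point_encoder(gt_points).max(dim=1).values   # (B, 512) global point feature
    z_img = image_encoder(image)                            # (B, 512) image feature
    return F.mse_loss(z_img, z_pc)

# At test time, the image feature is decoded into a point cloud.
z = image_encoder(torch.randn(2, 3, 64, 64))
pred_points = point_decoder(z).view(2, 1024, 3)
print(pred_points.shape)  # torch.Size([2, 1024, 3])
```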
“…Regressing 3D coordinates from 2D observations is a well-studied problem in computer vision [16]. While traditional approaches generate 3D point clouds from multi-view RGB images [51,54], recent works predict unstructured 3D point clouds from a single RGB image using deep learning [12,35,38]. Complementary to these works, PointNet [42] and successors [29,43] showed that even such unstructured 3D point clouds can be used to address various 3D vision tasks with deep learning.…”
Section: Location Fields (mentioning)
Confidence: 99%
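The excerpt's point about consuming unordered point clouds directly is usually attributed to the shared per-point MLP plus symmetric max-pooling of PointNet. The following is a deliberately stripped-down sketch of that idea (no T-Net alignment modules, arbitrary layer widths and class count), not the published PointNet architecture.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style classifier: a shared per-point MLP followed by
    max-pooling, a symmetric function that makes the global feature invariant
    to the ordering of the input points."""

    def __init__(self, num_classes=16):
        super().__init__()
        self.per_point = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1024),
        )
        self.head = nn.Linear(1024, num_classes)

    def forward(self, pts):                      # pts: (B, N, 3), unordered
        feat = self.per_point(pts)               # (B, N, 1024) per-point features
        global_feat = feat.max(dim=1).values     # (B, 1024) order-invariant pooling
        return self.head(global_feat)            # (B, num_classes)


model = TinyPointNet()
logits = model(torch.randn(8, 2048, 3))
print(logits.shape)  # torch.Size([8, 16])
```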