2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00872

GraphX-Convolution for Point Cloud Deformation in 2D-to-3D Conversion

Abstract: In this paper, we present a novel deep method to reconstruct a point cloud of an object from a single still image. Prior works in the field struggle to reconstruct an accurate and scalable 3D model due to inefficient and expensive 3D representations, a dependency between the output and the number of model parameters, or the lack of a suitable computing operation. We propose to overcome these by deforming a random point cloud to the object shape through two steps: feature blending and deformation. In the…
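
A minimal sketch of this two-step pipeline (feature blending, then deformation), assuming PyTorch; the simplified `GraphXConv` mixing layer, the module names, and all layer sizes below are illustrative assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

class GraphXConv(nn.Module):
    """Simplified point-interaction layer: a learnable mixing across all
    points followed by a channel mapping, loosely in the spirit of the
    paper's GraphX-convolution (illustrative, not the exact operator)."""
    def __init__(self, num_points, in_ch, out_ch):
        super().__init__()
        self.point_mix = nn.Linear(num_points, num_points)  # mixes rows (points)
        self.feat_map = nn.Linear(in_ch, out_ch)            # maps columns (channels)

    def forward(self, x):  # x: (B, N, C)
        x = self.point_mix(x.transpose(1, 2)).transpose(1, 2)
        return torch.relu(self.feat_map(x))

class PointCloudDeformer(nn.Module):
    def __init__(self, num_points=1024, img_feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(  # stand-in image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, img_feat_dim),
        )
        self.deform = nn.Sequential(
            GraphXConv(num_points, 3 + img_feat_dim, 128),
            nn.Linear(128, 3),  # per-point xyz offsets
        )

    def forward(self, image, init_points):  # init_points: (B, N, 3), random cloud
        feat = self.encoder(image)                                    # (B, F)
        feat = feat.unsqueeze(1).expand(-1, init_points.size(1), -1)  # (B, N, F)
        blended = torch.cat([init_points, feat], dim=-1)  # step 1: feature blending
        return init_points + self.deform(blended)         # step 2: deformation
```

Note that the deformation is applied as a residual (`init_points + offsets`), so the network only has to learn how to move points rather than synthesize coordinates from scratch.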

Cited by 33 publications (19 citation statements)
References 26 publications
“…Recently, due to the establishment of large-scale 3D model datasets such as ShapeNet [2] and ModelNet [62], it has become popular to reconstruct a complete 3D shape from a single image by exploiting shape priors modeled by deep neural networks. Various degrees of success have also been achieved by designing 3D shape decoders tailored to different shape representations, including voxels [7,14], point clouds [10,32], meshes [13,36,48], and implicit fields [4,28,38,41,48,49]. The voxel decoders [7,61] take advantage of conventional 3D convolution operations to generate volumetric grids.…”
Section: Single-View 3D Object Reconstruction
confidence: 99%
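As an illustration of the kind of voxel decoder the quoted passage describes, here is a minimal sketch in PyTorch that upsamples a latent code into a 32³ occupancy grid with transposed 3D convolutions; the layer widths and grid size are assumptions for illustration:

```python
import torch
import torch.nn as nn

class VoxelDecoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 4 * 4 * 4)  # seed 4^3 feature grid
        self.up = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8^3
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16^3
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),               # 32^3
        )

    def forward(self, z):                 # z: (B, latent_dim)
        x = self.fc(z).view(-1, 128, 4, 4, 4)
        return torch.sigmoid(self.up(x))  # per-voxel occupancy in [0, 1]
```

The cubic memory growth of such grids is one reason the surveyed works also explore point cloud, mesh, and implicit-field decoders.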
“…As a complement, we introduce surface-aligned features that capture fine details of the surface to facilitate radiance field learning. Given an input image I, we first regress a sparse point cloud S of size 1024 from I using a point set generator G_S based on GraphX-convolutions [32]. Then, we feed the generated point cloud to a PointNet++ [42] network to extract pointwise features F_S.…”
Section: Feature Extraction
confidence: 99%
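A schematic of that two-stage pipeline, with a plain shared per-point MLP standing in for PointNet++ (the generator call, tensor shapes, and all names here are illustrative assumptions, not the cited paper's code):

```python
import torch
import torch.nn as nn

class PointwiseFeatureNet(nn.Module):
    """Shared per-point MLP; a PointNet-style stand-in for PointNet++."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, pts):   # pts: (B, 1024, 3)
        return self.mlp(pts)  # F_S: (B, 1024, out_dim)

# Usage, assuming `point_set_generator` is a GraphX-based generator G_S:
#   S   = point_set_generator(image)  # (B, 1024, 3) sparse point cloud
#   F_S = PointwiseFeatureNet()(S)    # per-point features for the radiance field
```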
“…Deep learning-based methods are the preferred approach for end-to-end inference of 3D geometry from a single image [32]. [33] proposed a convolutional neural network (CNN) to decompose facial images into albedo, shading, and normal maps.…”
Section: Related Work
confidence: 99%
“…Recent advances in graph convolutional networks [14] suggest that graph representations can provide better features for point cloud processing. Such representations already outperform the state of the art in many other computer vision tasks [32,19,28,34]. Thus, we investigate the use of a graph representation of the LiDAR point cloud for the task of 3D vehicle detection.…”
Section: R-GCN, C-GCN
confidence: 99%
“…GCNs have an increased receptive field with respect to PointNet++ [24], since the edge connections are dynamically reassigned at each intermediate layer. With such features, GCNs have shown success in tasks such as segmentation [32], point cloud deformation [19], adversarial generation [28], and point cloud registration [34]. However, to the best of our knowledge, we introduce the first 3D vehicle detection module using a graph representation of the LiDAR input.…”
Section: Related Work
confidence: 99%
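A minimal sketch of a dynamic edge convolution of the kind described above, where the kNN graph is rebuilt in feature space before every layer so the effective receptive field grows with depth; this is an illustrative EdgeConv-style re-implementation in PyTorch, not the cited papers' code:

```python
import torch
import torch.nn as nn

def knn_indices(x, k):
    """x: (B, N, C). Returns (B, N, k) indices of each point's k nearest
    neighbours in feature space (self excluded)."""
    d = torch.cdist(x, x)                                 # (B, N, N) distances
    return d.topk(k + 1, largest=False).indices[..., 1:]  # drop self-match

class DynamicEdgeConv(nn.Module):
    """EdgeConv-style layer: the neighbourhood graph is recomputed from the
    current features on every forward pass, so stacked layers can connect
    points that are distant in the input but close in feature space."""
    def __init__(self, in_ch, out_ch, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, x):  # x: (B, N, C)
        B, N, C = x.shape
        idx = knn_indices(x, self.k)  # edges reassigned at this layer
        nbrs = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C),
        )                                                  # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([center, nbrs - center], dim=-1)  # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values            # max over neighbours
```

Recomputing `idx` from each layer's input features, rather than fixing the graph once from xyz coordinates, is what produces the growing receptive field that the quoted passage attributes to GCNs over PointNet++.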