2022
DOI: 10.1007/978-3-031-19827-4_6

GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs

Abstract: Image guided depth completion aims to recover per-pixel dense depth maps from sparse depth measurements with the help of aligned color images, which has a wide range of applications from robotics to autonomous driving. However, the 3D nature of sparse-to-dense depth completion has not been fully explored by previous methods. In this work, we propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion. First, unlike previous methods, we leverage convoluti…
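As a reading aid, the snippet below is a minimal, hedged sketch of the general idea named in the abstract: refining a depth map by propagating information from dynamically selected 3D neighbors with learned affinities. The class name, the choice of k, and all implementation details are illustrative assumptions and do not reproduce the authors' actual GraphCSPN code.

```python
import torch
import torch.nn as nn


class GraphPropagationStep(nn.Module):
    """Hypothetical graph-convolution propagation step over a flattened depth map."""

    def __init__(self, feat_dim: int, k: int = 9):
        super().__init__()
        self.k = k                                  # neighbors per pixel (assumed)
        self.affinity = nn.Linear(2 * feat_dim, 1)  # learned edge weight

    def forward(self, depth, feats, points):
        # depth:  (B, N)    current per-pixel depth estimate
        # feats:  (B, N, C) guidance features, e.g. from the aligned color image
        # points: (B, N, 3) back-projected 3D coordinates of each pixel
        B, N, C = feats.shape

        # 1. Build a dynamic graph: k nearest neighbors in 3D space.
        dist = torch.cdist(points, points)                           # (B, N, N)
        knn = dist.topk(self.k + 1, largest=False).indices[..., 1:]  # drop self

        batch = torch.arange(B, device=feats.device).view(B, 1, 1)
        nbr_feats = feats[batch, knn]                                 # (B, N, k, C)
        nbr_depth = depth[batch, knn]                                 # (B, N, k)

        # 2. Learned affinities between each pixel and its neighbors.
        center = feats.unsqueeze(2).expand(-1, -1, self.k, -1)
        w = self.affinity(torch.cat([center, nbr_feats], dim=-1)).squeeze(-1)
        w = torch.softmax(w, dim=-1)                                  # (B, N, k)

        # 3. Propagate: convex combination of neighbor depths.
        return (w * nbr_depth).sum(dim=-1)                            # (B, N)
```

In a full model such a step would be interleaved with feature extraction from the color image and iterated several times; this sketch only illustrates a single propagation update.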

Cited by 21 publications (3 citation statements)
References 53 publications
“…• We observed a similar trend in the T* ⇒ T setting, where RDFC-GAN obtained the best results on four of the five metrics. In terms of RMSE, RDFC-GAN without any iterative processing was only inferior to NLSPN [11] and GraphCSPN [38] (while being 1.2× and 1.5× faster in inference, respectively). The results are commendable because RDFC-GAN is not designed for the sparse setting.…”
Section: Training and Evaluation Settings
confidence: 88%
“…• Setting A (R ⇒ T): To be most in line with the real scenario of indoor depth completion, we input a raw depth map without downsampling during testing. At training time, we used the pseudo depth maps (R_ps) as the input and supervised with the raw depth image to train NLSPN [14], GraphCSPN [38], RDF-GAN [1], and the proposed RDFC-GAN. Meanwhile, we compared with several baselines [12], [35]–[37] that were trained on the synthetic semi-dense sensor data [37].…”
Section: Training and Evaluation Settings
confidence: 99%
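To make the Setting A protocol quoted above concrete, the hypothetical sketch below shows the described data arrangement: pseudo depth maps as network input with the raw depth as supervision at training time, and the raw depth map fed directly, without downsampling, at test time. All names (model, pseudo_depth, raw_depth) and the L1 loss choice are assumptions for illustration, not the cited authors' code.

```python
import torch
import torch.nn.functional as F


def training_step(model, rgb, pseudo_depth, raw_depth, optimizer):
    """Setting A training: pseudo depth (R_ps) as input, raw depth as supervision."""
    pred = model(rgb, pseudo_depth)
    valid = raw_depth > 0                    # supervise only where depth was measured
    loss = F.l1_loss(pred[valid], raw_depth[valid])  # loss choice is an assumption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def test_step(model, rgb, raw_depth):
    """Setting A testing: feed the raw depth map directly, without downsampling."""
    return model(rgb, raw_depth)
```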