2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01132
From Depth What Can You See? Depth Completion via Auxiliary Image Reconstruction

Cited by 66 publications (30 citation statements)
References 31 publications
“…
Method         RMSE (mm)  MAE (mm)  iRMSE (1/km)  iMAE (1/km)  Reference
CSPN [4]         1019.64    279.46          2.93         1.15  ECCV18
From [31]         901.43    292.36          4.92         1.35  CVPR20
TWISE [18]        840.20    195.58          2.08         0.82  CVPR21
NConv [9]         829.98    233.26          2.60         1.03  PAMI20
S2D [32]          814.73    249.95          2.80         1.21  ICRA19
PwP [46]          777.05    235.17          2.42         1.13  ICCV19
Fusion [44]       772.87    215.02          2.19         0.93  MVA19
DSPN [47]         766.74    220.36          2.47         1.03  ICIP20
DLiDAR [37]       758.38    226.50          2.56         1.15  CVPR19
FuseNet [2]       752.88    221.19          2.34         1.14  ICCV19
ACMNet [54]       744.91    206.09          2.08         0.90  TIP21
CSPN++ [3]        743.69    209.28          2.07         0.90  AAAI20
NLSPN [35]        741.68    199.59          1.99         0.84  ECCV20
GuideNet [42]     736.24    218.83          2.25         0.99  TIP20
FCFRNet [28]      735.81    217.15          2.20         0.98  AAAI21
PENet [16]        730.08    210.55          2.17         0.94  ICRA21
RigNet (ours)     713.44    204.55          2.16         0.92  -

Table 2. Quantitative comparisons with state-of-the-art methods on the KITTI depth completion benchmark.…”
Section: Methods
confidence: 99%
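The four columns above are the standard KITTI depth completion metrics: root mean squared error and mean absolute error on depth (mm), and the same two errors on inverse depth (1/km). As a reference, here is a minimal NumPy sketch of how they are computed over valid ground-truth pixels; the function name and the zero-means-invalid convention are our assumptions, not benchmark code:

```python
import numpy as np

def kitti_metrics(pred_mm, gt_mm):
    """RMSE/MAE in mm, iRMSE/iMAE in 1/km, over valid GT pixels.

    pred_mm, gt_mm: depth maps in millimetres; gt_mm == 0 marks
    pixels without ground truth (sparse LiDAR annotations).
    """
    valid = gt_mm > 0
    pred, gt = pred_mm[valid], gt_mm[valid]
    pred = np.maximum(pred, 1e-3)        # guard against non-positive predictions
    err = pred - gt
    # Inverse depth: 1 km = 1e6 mm, so (1/mm) * 1e6 = 1/km
    inv_err = (1.0 / pred - 1.0 / gt) * 1e6
    return {
        "RMSE (mm)":    np.sqrt(np.mean(err ** 2)),
        "MAE (mm)":     np.mean(np.abs(err)),
        "iRMSE (1/km)": np.sqrt(np.mean(inv_err ** 2)),
        "iMAE (1/km)":  np.mean(np.abs(inv_err)),
    }
```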
“…On this basis, early depth completion methods [43,23,5,21,9,32,45] mainly take depth maps as input, without the corresponding color images. Further, Lu et al. [30] take sparse depth as the only input, with color images serving as auxiliary supervision. However, single-modal depth completion methods are limited without other information as reference.…”
Section: Related Work
confidence: 99%
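For context, the auxiliary-supervision idea attributed to Lu et al. [30] can be sketched as a depth-only network with a second head that reconstructs the color image during training. The PyTorch sketch below is a deliberately tiny illustration with invented layer sizes, not the published model:

```python
import torch
import torch.nn as nn

class DepthOnlyCompletion(nn.Module):
    """Sparse depth in; dense depth out, plus an auxiliary RGB head
    used only as a training-time supervision signal (illustrative
    layer sizes, not the authors' architecture)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.depth_head = nn.Conv2d(32, 1, 3, padding=1)  # dense depth
        self.image_head = nn.Conv2d(32, 3, 3, padding=1)  # auxiliary RGB

    def forward(self, sparse_depth):
        feat = self.decoder(self.encoder(sparse_depth))
        return self.depth_head(feat), self.image_head(feat)

# Training: the color image supervises only the auxiliary head.
model = DepthOnlyCompletion()
sparse = torch.randn(2, 1, 64, 64)
rgb, gt = torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
pred_depth, pred_rgb = model(sparse)
loss = nn.functional.l1_loss(pred_depth, gt) + nn.functional.l1_loss(pred_rgb, rgb)
```

Since the RGB head only appears in the loss, inference needs nothing but the sparse depth, which is what makes this family of methods single-modal at test time.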
“…Our method can even outperform some early-stage supervised solutions. Recently developed supervised models tend to use large networks with 10 million or even 100 million parameters [41], [25], [42], and thus achieve better performance. Besides, we also list two state-of-the-art supervised methods in Table III.…”
Section: The Performance of the Surface Geometry Model
confidence: 99%
“…Depth completion aims to predict a pixel-level dense depth map from a given sparse depth map. Existing depth completion algorithms are mainly divided into depth-only methods and multiple-input methods [24]. Depth-only methods produce the corresponding dense depth image from the sparse depth alone, which is a challenging problem due to the lack of rich semantic information.…”
Section: Related Work
confidence: 99%
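When a real LiDAR scan is unavailable, the sparse input for this task is often simulated by subsampling a dense ground-truth map. A minimal sketch of that common protocol follows; the sample count is arbitrary and ours:

```python
import numpy as np

def sparsify(dense_depth, n_samples=500, rng=None):
    """Simulate a sparse LiDAR-style input by keeping only a few
    valid pixels of a dense depth map (a common evaluation protocol;
    the sample count is an illustrative choice)."""
    rng = np.random.default_rng() if rng is None else rng
    sparse = np.zeros_like(dense_depth)
    ys, xs = np.nonzero(dense_depth > 0)  # valid (non-zero) pixels
    keep = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
    sparse[ys[keep], xs[keep]] = dense_depth[ys[keep], xs[keep]]
    return sparse
```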
“…We also tackle the problem that vegetation and buildings are swapped in generated semantic images, which has been widely observed in previous works [14], [23], by introducing multi-scale spatial pooling blocks [16] and a structural similarity reconstruction loss [17]. In addition, although semantic cues are important for depth completion [24], existing methods do not take semantic labels as input. We further introduce semantic information to improve the accuracy of depth completion and aim to achieve competitive results.…”
Section: Introduction
confidence: 99%
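The structural similarity reconstruction loss mentioned above is usually the standard SSIM index turned into a loss. A minimal single-scale sketch, assuming the usual window size and stability constants (implementations differ in window type and multi-scale weighting):

```python
import torch
import torch.nn.functional as F

def ssim_loss(x, y, window=7, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM over a uniform local window; inputs are NCHW tensors
    scaled to [0, 1]. Single-scale sketch with the standard constants
    (an assumption; the cited work may use a different variant)."""
    mu_x = F.avg_pool2d(x, window, stride=1)
    mu_y = F.avg_pool2d(y, window, stride=1)
    var_x = F.avg_pool2d(x * x, window, stride=1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, stride=1) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    # Map SSIM in [-1, 1] to a loss in [0, 1]
    return torch.clamp((1 - ssim) / 2, 0, 1).mean()
```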