2022
DOI: 10.1007/978-3-031-19812-0_13
RigNet: Repetitive Image Guided Network for Depth Completion

Abstract: Depth completion aims to recover dense depth maps from sparse ones, where color images are often used to facilitate this task. Recent depth methods primarily focus on image guided learning frameworks. However, blurry guidance in the image and unclear structure in the depth still impede their performance. To tackle these challenges, we explore an efficient repetitive design in our image guided network to gradually and sufficiently recover depth values. Specifically, the efficient repetition is embodied in both …
Cited by 76 publications (18 citation statements) · References 77 publications
“…SfM-Learner (Zhou et al. 2017) is a seminal work that proposed to jointly predict scene depth and relative camera poses. Follow-up works enhanced SfM-Learner by decomposing depth scale (Wang et al. 2021b; Yan et al. 2023), introducing powerful neural networks (Guizilini et al. 2020; Lyu et al. 2021; Guizilini et al. 2022), and applying iterative refinement (Bangunharcana, Magd, and Kim 2023). Furthermore, MonoDepth2 (Godard et al. 2019) proposed a minimum reprojection loss to handle occlusions, and some works addressed the dynamic object problem by compensating and masking pixels within dynamic areas using optical flow (Zou, Luo, and Huang 2018; Ranjan et al. 2019) and pretrained segmentation models (Gordon et al. 2019).…”
Section: Related Work
Mentioning, confidence: 99%
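The minimum reprojection loss mentioned in the statement above is easy to state concretely. The following is a minimal sketch, assuming the warped source frames have already been synthesized from the predicted depth and relative pose; the average-pool SSIM and the function names `photometric_error` and `min_reprojection_loss` are illustrative simplifications, not the MonoDepth2 reference implementation.

```python
# Minimal sketch of a MonoDepth2-style minimum reprojection loss.
# Assumes warped source frames are precomputed from predicted depth and pose.
import torch
import torch.nn.functional as F


def photometric_error(pred, target, alpha=0.85):
    """Per-pixel photometric error: weighted SSIM + L1 (a common choice)."""
    l1 = (pred - target).abs().mean(dim=1, keepdim=True)            # (B,1,H,W)
    # Simplified 3x3 average-pool SSIM; real implementations often use Gaussian windows.
    mu_x = F.avg_pool2d(pred, 3, 1, 1)
    mu_y = F.avg_pool2d(target, 3, 1, 1)
    sigma_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    ssim_loss = torch.clamp((1 - ssim.mean(dim=1, keepdim=True)) / 2, 0, 1)
    return alpha * ssim_loss + (1 - alpha) * l1


def min_reprojection_loss(target, warped_sources):
    """Take the per-pixel minimum over source views to handle occlusions."""
    errors = torch.stack([photometric_error(w, target) for w in warped_sources])
    return errors.min(dim=0).values.mean()
```

Taking the per-pixel minimum over the source views, rather than averaging them, is what lets occluded pixels fall back on the view in which they are actually visible.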
“…Deep neural networks have greatly promoted the development of the depth completion task. At present, related work on depth completion can be roughly divided into three main categories: single-branch-based methods [4, 18, 19, 35–38], two-branch-based methods [16, 17, 20, 22, 39–43], and multiple-branch-based methods [3, 25, 44]. The single-branch-based methods use only one encoder-decoder network to complete depth maps.…”
Section: Depth Completion
Mentioning, confidence: 99%
“…For example, Tang et al. [16] used two independent encoder-decoder networks to extract RGB and depth features respectively, and then designed a guided convolution module to fuse the decoder features from the RGB branch with the encoder features from the depth branch. In addition, some multiple-branch-based works [3, 44] employed three hourglass networks to extract more useful information. Yan et al. [44] indicated that increasing the number of hourglass networks could improve depth completion performance.…”
Section: Depth Completion
Mentioning, confidence: 99%
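The guided convolution idea referenced above, fusing the two branches with image-conditioned kernels, can be sketched as follows. This is a simplified, unfactorized depthwise formulation under assumed names (`GuidedFusion`, `kernel_gen`); the original GuideNet factorizes the kernels for efficiency, so this is an illustration of the concept rather than Tang et al.'s implementation.

```python
# Simplified sketch of image-guided feature fusion in a two-branch network:
# RGB-branch features predict spatially-varying depthwise 3x3 kernels that are
# applied to depth-branch features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidedFusion(nn.Module):
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        # Kernel-generating branch: one k*k kernel per channel per spatial location.
        self.kernel_gen = nn.Conv2d(channels, channels * k * k, kernel_size=3, padding=1)

    def forward(self, rgb_feat, depth_feat):
        b, c, h, w = depth_feat.shape
        kernels = self.kernel_gen(rgb_feat).view(b, c, self.k * self.k, h, w)
        # Unfold depth features into k*k neighbourhoods and weight them per pixel.
        patches = F.unfold(depth_feat, self.k, padding=self.k // 2)
        patches = patches.view(b, c, self.k * self.k, h, w)
        fused = (kernels * patches).sum(dim=2)               # (B, C, H, W)
        return fused + depth_feat                            # residual connection


# Usage: fuse RGB-branch decoder features with depth-branch encoder features.
fusion = GuidedFusion(channels=64)
rgb_feat = torch.randn(1, 64, 32, 32)
depth_feat = torch.randn(1, 64, 32, 32)
out = fusion(rgb_feat, depth_feat)                           # (1, 64, 32, 32)
```

The key design point is that the convolution weights are not fixed parameters but are predicted from the image features, so the image guidance adapts the depth filtering at every pixel.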
“…Ma and Karaman [10] used a CNN encoder-decoder to predict the full-resolution depth image from a set of depth samples and RGB images. On this basis, several methods [11], [12], [19], [31], [32] incorporating additional representations or auxiliary outputs have been proposed. Qiu et al. [11] produced dense depth using the surface normal as the intermediate representation.…”
Section: Introduction
Mentioning, confidence: 99%
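The early-fusion setup described in this statement amounts to concatenating the RGB image with the (mostly zero) sparse depth map and regressing dense depth with a single encoder-decoder. The toy network below only illustrates that input/output structure under assumed layer sizes; Ma and Karaman's original model is substantially deeper, so treat this as a sketch of the idea, not their architecture.

```python
# Minimal sketch of single-branch, early-fusion depth completion:
# concatenate RGB and sparse depth, then regress dense depth.
import torch
import torch.nn as nn


class SparseToDenseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 4-channel input (RGB + sparse depth, zeros at missing pixels).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to input resolution and predict one depth channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)   # early fusion of the two inputs
        return self.decoder(self.encoder(x))


net = SparseToDenseNet()
rgb = torch.randn(1, 3, 64, 64)
sparse_depth = torch.randn(1, 1, 64, 64)            # zeros where the LiDAR has no return
dense_depth = net(rgb, sparse_depth)                # (1, 1, 64, 64)
```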