2021
DOI: 10.1016/j.patcog.2020.107578
DPNet: Detail-preserving network for high quality monocular depth estimation

Citation types: 0 supporting, 28 mentioning, 0 contrasting
Cited by 37 publications (28 citation statements)
References 16 publications
“…The depth results show that our approach is significantly superior to Roxas and Oishi [15], Graber et al [17], and Hu and Chen [29]. Moreover, in contrast to some current prominent CNN methods [20-28], our method uses stereo information and the given ground-truth pose; the results in Table 2 show that our approach can reach the performance of state-of-the-art methods. The estimated depth results are visualized in Figure 2.…”
Section: KITTI Dataset (mentioning)
confidence: 90%
“…To evaluate the performance of our proposed method, we conduct experiments in various settings on two benchmark datasets: the KITTI dataset for the outdoor scenario and the NYU Depth V2 dataset for the indoor scenario. We provide both quantitative and qualitative results for our method, and compare against other leading monocular depth estimation methods, i.e., [6, 7, 17, 20, 21, 28, 30, 44, 58, 64, 65, 66, 67].…”
Section: Methods (mentioning)
confidence: 99%
“…The quantitative results on the KITTI Eigen test split are shown in Table 2, where [20, 28, 30, 44, 69, 71, 72] adopted the same split strategy as our method. It is worth noting that Lee et al [20] devised a pyramid-like decoder with the same encoder as ours, ResNeXt101.…”
Section: Methods (mentioning)
confidence: 99%
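The two excerpts above describe quantitative evaluation on the KITTI Eigen test split. For context, below is a minimal sketch of the standard error and accuracy metrics conventionally reported in this literature (abs rel, sq rel, RMSE, RMSE log, and the δ < 1.25^k accuracies). The function name and numpy implementation are illustrative assumptions, not code from DPNet or the citing papers.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics as conventionally reported on the
    KITTI Eigen split. Illustrative sketch, not code from the cited papers.

    pred, gt: 1-D numpy arrays of predicted and ground-truth depths (metres),
    already masked to valid, positive ground-truth pixels.
    """
    # delta accuracies: fraction of pixels whose ratio error is under 1.25^k
    ratio = np.maximum(gt / pred, pred / gt)
    d1 = (ratio < 1.25).mean()
    d2 = (ratio < 1.25 ** 2).mean()
    d3 = (ratio < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)        # absolute relative error
    sq_rel = np.mean((gt - pred) ** 2 / gt)          # squared relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))        # root mean squared error
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))

    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse,
            "rmse_log": rmse_log, "d1": d1, "d2": d2, "d3": d3}
```

Lower is better for the four error metrics and higher is better for d1-d3, which is why comparison tables on the Eigen split typically report both families side by side.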
“…Specifically, encoded features were fed into different streams for decoding depth residuals. Ye et al [29] proposed DPNet for high-quality monocular depth estimation. They designed an efficient non-local spatial attention module and a spatial branch (SB) to preserve spatial information.…”
Section: Related Work (mentioning)
confidence: 99%
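The excerpt above names DPNet's efficient non-local spatial attention module but does not reproduce it. Below is a minimal PyTorch sketch of a generic non-local spatial attention block with a residual connection; the class name, channel reduction, and layer layout are assumptions for illustration and do not reproduce DPNet's "efficient" variant.

```python
import torch
import torch.nn as nn

class NonLocalSpatialAttention(nn.Module):
    """Generic non-local spatial attention block (illustrative sketch;
    DPNet's actual "efficient" module is not published in this report)."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction           # reduced embedding width
        self.query = nn.Conv2d(channels, inter, kernel_size=1)
        self.key = nn.Conv2d(channels, inter, kernel_size=1)
        self.value = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                     # (b, c', hw)
        v = self.value(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        # every position attends to every other spatial position
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # (b, hw, hw)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)      # (b, c', h, w)
        return x + self.out(y)  # residual: output keeps the input shape
```

Because the residual keeps the output shape equal to the input, such a block can be dropped between encoder and decoder stages without changing the surrounding architecture, consistent with the detail-preserving role described in the excerpt.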