2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9340802

Toward Hierarchical Self-Supervised Monocular Absolute Depth Estimation for Autonomous Driving Applications

Cited by 74 publications (47 citation statements)
References 23 publications
“…Table 7 compares our method with other methods (J. Jiang et al., 2019; Liu et al., 2020; Xue et al., 2020), and our method has the lowest RMSE value. The REL value is slightly greater than the value in Liu et al.…”
Section: Experiments Of 3doim (mentioning)
Confidence: 89%
“…Hence, the depth and ego-motion estimates from these methods suffer from scale inconsistency, in addition to the global scale ambiguity inherent in monocular vision. Therefore, ground-truth LiDAR depth maps [4] or the camera height [25] are used during inference to recover per-image scale.…”
Section: Related Work (mentioning)
Confidence: 99%
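The per-image scale recovery mentioned in this excerpt is commonly done by median scaling against sparse LiDAR ground truth at evaluation time. A minimal sketch of that standard practice is below; the function name, depth limits, and mask convention are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def median_scale(pred_depth, gt_depth, min_depth=1e-3, max_depth=80.0):
    """Recover the per-image scale of a scale-ambiguous monocular
    depth prediction by matching medians over valid ground-truth
    pixels (illustrative sketch; depth limits are typical KITTI-style
    evaluation bounds, assumed here)."""
    # Valid-pixel mask: LiDAR maps are sparse, and depths outside the
    # evaluation range are ignored.
    mask = (gt_depth > min_depth) & (gt_depth < max_depth)
    scale = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
    return pred_depth * scale, scale

# Usage: a prediction that is correct up to a global factor of 5
gt = np.full((4, 4), 10.0)
pred = gt / 5.0
scaled, s = median_scale(pred, gt)
```

Because the whole prediction is multiplied by one scalar, this corrects only the global scale ambiguity; it cannot fix the frame-to-frame scale inconsistency the excerpt also points out.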
“…This unsatisfactory performance may stem from two causes: the limited modeling power of the network and the limited supervision provided by the loss functions. Many existing self-supervised depth estimation networks [16]–[18], [23], [24] adopt a residual-network [25] based encoder-decoder architecture [26]. Such a depth network has a powerful capacity for capturing local information and representing hierarchical abstract features, but is insufficient to distinguish important details from interfering information.…”
Section: Introduction (mentioning)
Confidence: 99%
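The encoder-decoder pattern this excerpt describes can be illustrated with a toy, non-learned sketch: average pooling stands in for the ResNet encoder stages and nearest-neighbour upsampling with skip connections stands in for the decoder. All names and the fixed fusion rule here are illustrative placeholders, not the cited architecture.

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling: one encoder downsampling stage
    # (assumes even height and width).
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] +
                   x[0::2, 1::2] + x[1::2, 1::2])

def upsample2(x):
    # Nearest-neighbour upsampling: one decoder stage.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_depth_net(img, levels=3):
    """Toy encoder-decoder: downsample `levels` times while storing
    skip features, then upsample back, fusing each skip. Real networks
    replace these fixed ops with learned convolutional blocks."""
    skips, x = [], img
    for _ in range(levels):              # encoder: hierarchical abstraction
        skips.append(x)
        x = avg_pool2(x)
    for skip in reversed(skips):         # decoder: recover resolution
        x = 0.5 * (upsample2(x) + skip)  # skip-connection fusion
    return x

# Usage: output resolution matches the input image
img = np.random.rand(32, 32)
depth = toy_depth_net(img)
```

The skip connections are what let the decoder reinject fine local detail lost during downsampling, which is the capacity (and its limits) the excerpt is discussing.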