2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW)
DOI: 10.1109/wacvw58289.2023.00069
The Monocular Depth Estimation Challenge

Cited by 15 publications (7 citation statements) | References 57 publications
“…This represents a relative improvement of 27.62% in F-Score (13.72% -Baseline) and 18% in AbsRel (29.66% -OPDAI) w.r.t. the first edition of the challenge [77]. The top-performing self-supervised method was Team imec-IDLab-UAntwerp, which leveraged improved pretrained encoders and deformable decoder convolutions.…”
Section: Quantitative Results
confidence: 99%
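The relative improvements quoted above follow the usual convention: (new − baseline)/baseline for F-Score, where higher is better, and (baseline − new)/baseline for AbsRel, where lower is better. A minimal sketch of that arithmetic; the improved scores below (17.51 and 24.32) are hypothetical values back-derived from the quoted relative gains, since only the baselines (13.72% F-Score, 29.66% AbsRel) appear in the text:

```python
def relative_improvement(baseline, new, higher_is_better=True):
    """Relative change of `new` vs. `baseline`, as a fraction."""
    if higher_is_better:
        return (new - baseline) / baseline
    return (baseline - new) / baseline

# Hypothetical winning scores consistent with the quoted relative gains.
f_score_gain = relative_improvement(13.72, 17.51, higher_is_better=True)
abs_rel_gain = relative_improvement(29.66, 24.32, higher_is_better=False)
print(round(f_score_gain * 100, 2))  # ≈ 27.62
print(round(abs_rel_gain * 100, 2))  # ≈ 18.0
```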
“…The Monocular Depth Estimation Challenge series [77]-the focus of this paper-is based on the MonoDepth Benchmark [78], which provided fair evaluations and implementations of recent SotA self-supervised MDE algorithms. Our focus lies on zero-shot generalization to a wide diversity of scenes.…”
Section: Related Work
confidence: 99%
“…The estimation of depth from a single image is an ill-posed problem, and numerous methods have been proposed to address it [26,27,28,29]. Supervised learning initially dominated the training approaches used, with Eigen et al [2] proposing comprehensive metrics.…”
Section: Self-supervised Monocular Depth Estimation
confidence: 99%
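The "comprehensive metrics" attributed to Eigen et al are now the standard monocular-depth evaluation suite. A minimal NumPy sketch of the most common ones (AbsRel, RMSE, and the δ<1.25 accuracy threshold), assuming predictions and ground truth are aligned, positive-valued depth arrays:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth evaluation metrics (Eigen-style)."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    abs_rel = np.mean(np.abs(pred - gt) / gt)       # absolute relative error
    rmse = np.sqrt(np.mean((pred - gt) ** 2))       # root mean squared error
    ratio = np.maximum(pred / gt, gt / pred)        # per-pixel max ratio
    delta1 = np.mean(ratio < 1.25)                  # fraction within threshold
    return {"AbsRel": abs_rel, "RMSE": rmse, "delta1": delta1}

m = depth_metrics([2.0, 4.0, 8.0], [2.0, 5.0, 8.0])
```

Lower AbsRel/RMSE and higher δ accuracies indicate better depth estimates, which is why the challenge results above report AbsRel decreases as improvements.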