2020
DOI: 10.3390/s21010015
Real-Time Single Image Depth Perception in the Wild with Handheld Devices

Abstract: Depth perception is paramount for tackling real-world problems, ranging from autonomous driving to consumer applications. For the latter, depth estimation from a single image would represent the most versatile solution since a standard camera is available on almost any handheld device. Nonetheless, two main issues limit the practical deployment of monocular depth estimation methods on such devices: (i) the low reliability when deployed in the wild and (ii) the resources needed to achieve real-time performance […]

Cited by 44 publications (31 citation statements) · References 43 publications
“…One simple alternative is employing lightweight architectures such as MobileNet [24,25,49,55], GhostNet [21], and FBNet [54]. One popular approach is utilizing network compression techniques, including quantization [22], network pruning [58], and knowledge distillation [60,1]. Other methods employ well-known pyramid networks or dynamic optimization schemes.…”
Section: Related Work
confidence: 99%
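One of the compression techniques listed in the statement above, unstructured magnitude pruning, is simple enough to sketch directly. The function and toy weight matrix below are illustrative assumptions, not taken from any of the cited works: the smallest-magnitude fraction of the weights is zeroed while the rest are kept.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Unstructured magnitude pruning: zero out the smallest-magnitude
    fraction (`sparsity`) of the weights, leaving the rest unchanged."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy 2x2 weight matrix: at 50% sparsity only 0.9 and -0.7 survive.
w = np.array([[0.9, -0.05],
              [0.02, -0.7]])
pruned = magnitude_prune(w, sparsity=0.5)
```

In practice such a mask is applied per layer and the network is fine-tuned afterwards to recover accuracy; structured variants prune whole channels instead, which maps better onto mobile hardware.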
“…To reduce inference times, there exist two primary avenues outside of increasing compute resources: model size reduction and platform-specific optimizations. To reduce the size of our depth estimation model, we leverage the knowledge distillation techniques proposed in [8]. Knowledge distillation is the process of training a smaller, student network to learn the function represented by a larger, teacher network via a semi-supervised learning scheme.…”
Section: A. Depth Estimation
confidence: 99%
“…In our knowledge distillation pipeline, we used the MiDaS network developed in [6] as the teacher network. For the student architecture, we employed the same architecture from [8]. The student network output was then regressed to match the output of the MiDaS network on our curated dataset.…”
Section: A. Depth Estimation
confidence: 99%
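The distillation pipeline described in these statements — regressing a small student to a frozen teacher's predictions — can be sketched in miniature. This is a hedged toy, not the cited implementation: the "teacher" below is a fixed random linear map standing in for MiDaS, the student is a same-shape linear model, and training is plain gradient descent on an MSE distillation loss over unlabeled batches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a frozen linear "teacher" playing the role of MiDaS.
teacher_W = rng.normal(size=(4, 8))

def teacher(x: np.ndarray) -> np.ndarray:
    """Frozen teacher: its predictions act as pseudo-ground-truth depth."""
    return x @ teacher_W.T

# The "student": same shape here for simplicity; real students are smaller.
student_W = np.zeros((4, 8))
lr = 0.05
for _ in range(500):
    x = rng.normal(size=(32, 8))        # batch of unlabeled inputs
    target = teacher(x)                 # teacher outputs serve as labels
    pred = x @ student_W.T
    # Gradient of the mean-squared distillation loss w.r.t. student_W.
    grad = 2 * (pred - target).T @ x / len(x)
    student_W -= lr * grad

# After distillation, the student tracks the teacher on unseen inputs.
x_test = rng.normal(size=(8, 8))
err = np.mean((x_test @ student_W.T - teacher(x_test)) ** 2)
```

Because no ground-truth depth is needed, any curated but unlabeled image collection can drive the regression, which is what makes the scheme semi-supervised in the sense used above.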
“…Describing a global proposal, based on MIL, that demonstrates a high generalization capability for anomaly detection in unseen real scenarios in the Wild, understanding "in the Wild" as in non-prepared daily scenarios, instead of in lab-based ones [3,4].…”
Section: Introduction
confidence: 99%