2021
DOI: 10.48550/arxiv.2111.09692
Preprint
SUB-Depth: Self-distillation and Uncertainty Boosting Self-supervised Monocular Depth Estimation

Abstract: We propose SUB-Depth, a universal multi-task training framework for self-supervised monocular depth estimation (SDE). Depth models trained with SUB-Depth outperform the same models trained in a standard single-task SDE framework. By introducing an additional self-distillation task into a standard SDE training framework, SUB-Depth trains a depth network, not only to predict the depth map for an image reconstruction task, but also to distill knowledge from a trained teacher network with unlabelled data. To take …
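The abstract describes SUB-Depth as adding a self-distillation task, supervised by a trained teacher network on unlabelled data, alongside the standard photometric reconstruction objective of self-supervised depth estimation. Since the abstract is truncated here, the PyTorch sketch below is only a hypothetical rendering of that idea: the class name SubDepthStyleLoss, the plain L1 photometric and distillation terms, and the learned log-variance task weighting are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SubDepthStyleLoss(nn.Module):
    """Hypothetical multi-task loss: photometric reconstruction plus
    self-distillation, balanced by learned per-task log-variances."""

    def __init__(self):
        super().__init__()
        # One learnable log-variance per task (a common uncertainty-weighting trick).
        self.log_var_photo = nn.Parameter(torch.zeros(()))
        self.log_var_distill = nn.Parameter(torch.zeros(()))

    def photometric_loss(self, target_img, reconstructed_img):
        # Simplified photometric term: plain L1; the paper's actual term is richer.
        return (target_img - reconstructed_img).abs().mean()

    def distillation_loss(self, student_depth, teacher_depth):
        # Regress the student's depth toward the frozen teacher's pseudo-depth.
        return F.l1_loss(student_depth, teacher_depth.detach())

    def forward(self, target_img, reconstructed_img, student_depth, teacher_depth):
        l_photo = self.photometric_loss(target_img, reconstructed_img)
        l_dist = self.distillation_loss(student_depth, teacher_depth)
        # Uncertainty-style weighting: exp(-s) * L + s, with s learned per task.
        total = (torch.exp(-self.log_var_photo) * l_photo + self.log_var_photo
                 + torch.exp(-self.log_var_distill) * l_dist + self.log_var_distill)
        return total, {"photo": l_photo.item(), "distill": l_dist.item()}
```

In this sketch the detached teacher prediction acts as a pseudo label, so gradients flow only to the student and to the two learned task weights.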

Cited by 1 publication (1 citation statement)
References: 39 publications
“…Pan et al [32] designed a student encoder to extract features from two datasets of indoor and outdoor scenes, and introduced a dissimilarity loss to separate the feature spaces of the different scenes. Weighted multi-task learning [33] was used to learn weights that minimize the cost of the training labels, with self-distillation methods assisting the multi-task training. Han et al [34] designed an attention-block-based decoder to enhance the representation of detail in the feature maps while preserving global context, and used self-distillation's single-scale photometric loss to improve the performance of the student model.…”
Section: Self-distillation Monocular Depth Estimation
Mentioning confidence: 99%
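The citation statement above notes that the weighted multi-task learning of [33] learns how to balance the training signals, with self-distillation assisting the multi-task training. As a purely illustrative usage of the hypothetical SubDepthStyleLoss sketched earlier (TinyDepthNet and the random tensors below are stand-ins, not the paper's architecture or data), one optimisation step might look like:

```python
import torch
import torch.nn as nn


class TinyDepthNet(nn.Module):
    # Minimal stand-in network producing a single-channel "depth" map.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)


student, teacher = TinyDepthNet(), TinyDepthNet().eval()
criterion = SubDepthStyleLoss()  # from the sketch above
optimizer = torch.optim.Adam(
    list(student.parameters()) + list(criterion.parameters()), lr=1e-4
)

target_img = torch.rand(2, 3, 64, 64)         # stand-in target frame
reconstructed_img = torch.rand(2, 3, 64, 64)  # stand-in view-synthesis result

with torch.no_grad():
    teacher_depth = teacher(target_img)       # pseudo-depth labels, no gradient

student_depth = student(target_img)
loss, logs = criterion(target_img, reconstructed_img, student_depth, teacher_depth)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Note that the loss module's own parameters (the task log-variances) are optimised jointly with the student, so the balance between reconstruction and distillation is learned rather than hand-tuned.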