2020
DOI: 10.1109/access.2020.3032582

Unsupervised Monocular Training Method for Depth Estimation Using Statistical Masks

Abstract: Recently, unsupervised monocular training methods based on convolutional neural networks have shown surprising progress in improving the accuracy of depth estimation. However, the performance of these methods suffers severely from problematic pixels such as occluded pixels, low-texture pixels, and so on. In this paper, we introduce a method to generate a mask from the statistics of error maps for segmenting the problematic pixels. Different from the conventional methods, which use additional segmentation networks t…
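As a rough illustration of the idea outlined in the abstract, the sketch below thresholds a per-pixel photometric error map by its own statistics (mean plus k standard deviations) to mask out problematic pixels during training; the threshold rule, function names, and tensor layout are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def statistical_mask(error_map, k=1.0):
    """Mask pixels whose photometric error is statistically abnormal.

    error_map: (B, 1, H, W) per-pixel reprojection error.
    k: tolerated number of standard deviations above the per-image mean
       (an illustrative choice; the paper's exact rule may differ).
    Returns a float mask of the same shape: 1 = keep, 0 = problematic.
    """
    # Per-image statistics of the error map.
    mean = error_map.mean(dim=(2, 3), keepdim=True)
    std = error_map.std(dim=(2, 3), keepdim=True)
    # Pixels whose error is far above the typical value (occlusions,
    # low-texture regions, ...) are excluded from the training loss.
    return (error_map <= mean + k * std).float()

# Usage sketch: weight the photometric loss so masked pixels do not
# contribute gradients.
# mask = statistical_mask(err)
# masked_loss = (mask * err).sum() / mask.sum().clamp(min=1.0)
```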

Cited by 2 publications (2 citation statements)
References 19 publications
“…Lighting conditions. Previous studies 20,34 show that during depth estimation model training, low-texture areas caused by insufficient lighting or overexposure will produce problematic pixels in depth estimation.…”
Section: Factors That Affect the Training of MDE
Mentioning confidence: 99%
“…PRESENCE is an online and adaptive data reweighting method that uses self-influence scores to weigh samples in a training batch. We note that during pre-training, the training loss decreases exponentially in the initial steps, with a minimal decrease in loss values in the subsequent stages (Yang et al, 2021). Furthermore, well-trained models can identify noisy samples better when used to calculate SI scores, as compared to models in very early stages of training (Pruthi et al, 2020).…”
Section: Introduction
Mentioning confidence: 96%
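As a rough illustration of the batch-reweighting idea described in this statement, the sketch below computes TracIn-style self-influence scores (the squared gradient norm of each sample's own loss) and down-weights high-score samples within a batch; the softmax weighting, function names, and signatures are assumptions for illustration, not PRESENCE's actual implementation.

```python
import torch

def self_influence_scores(model, loss_fn, inputs, targets):
    """TracIn-style self-influence: squared gradient norm of each
    sample's own loss with respect to the model parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    scores = []
    for x, y in zip(inputs, targets):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        scores.append(sum(g.pow(2).sum() for g in grads))
    return torch.stack(scores)

def reweighted_batch_loss(model, loss_fn, inputs, targets, temperature=1.0):
    """Down-weight likely-noisy samples (high self-influence) within a
    batch; the softmax rule here is an illustrative assumption."""
    si = self_influence_scores(model, loss_fn, inputs, targets)
    weights = torch.softmax(-si / temperature, dim=0).detach()
    per_sample = torch.stack(
        [loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
         for x, y in zip(inputs, targets)]
    )
    return (weights * per_sample).sum()
```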