2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01575
Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark

Cited by 50 publications (16 citation statements) · References 39 publications
“…Meanwhile, although combining Monodepth2 [18] and CycleGAN [17] can transform nighttime images into 'fake' daytime images and reduce the domain shift between training and test images, the performance is also limited by the loss of CycleGAN itself. ADFA [29] and RNW-Net [30], which are specialized for estimating depth from nighttime images, can reduce the domain shift between day and night images at the feature level, but their performance is limited by the daytime results. ADDS-DepthNet [20] also uses day-night image pairs as input and improves depth estimation for both daytime and nighttime images to a certain extent, but its performance can still be improved.…”
Section: Quantitative Results
confidence: 99%
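As a concrete reference for the two-stage baseline described in the excerpt above, the sketch below (PyTorch) chains a CycleGAN night-to-day generator with a Monodepth2-style depth network at inference time. The class and attribute names (`TwoStageNightDepth`, `generator`, `depth_net`) are placeholders for illustration, not the released implementations of either method.

```python
# Minimal sketch of the two-stage baseline: translate a nighttime frame to a
# "fake" daytime frame with a CycleGAN generator, then run a Monodepth2-style
# depth network on the result.
import torch
import torch.nn as nn


class TwoStageNightDepth(nn.Module):
    def __init__(self, generator: nn.Module, depth_net: nn.Module):
        super().__init__()
        self.generator = generator   # CycleGAN night->day generator
        self.depth_net = depth_net   # Monodepth2-style depth network

    @torch.no_grad()
    def forward(self, night_image: torch.Tensor) -> torch.Tensor:
        # Stage 1: reduce the day/night domain shift in image space.
        fake_day = self.generator(night_image)
        # Stage 2: estimate depth as if the input were a daytime frame.
        # Any artifacts introduced by the translation propagate to the depth
        # map, which is the limitation pointed out in the excerpt above.
        return self.depth_net(fake_day)
```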
“…[29] adapts a network trained on daytime images to work on nighttime images, aiming to transfer knowledge from the daytime domain to the nighttime domain. [30] proposes Priors-Based Regularization to learn distribution knowledge from unpaired depth maps and a Mapping-Consistent Image Enhancement module to improve image visibility and contrast. [20] proposes a domain-separated framework that partitions the information of day-night image pairs into two complementary sub-spaces and relieves the influence of disturbing terms for all-day depth estimation.…”
Section: Nighttime Depth Estimation
confidence: 99%
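To make the enhancement idea in the excerpt above concrete, here is a minimal, hedged illustration: a single monotonically increasing intensity mapping (a fixed gamma curve here, purely for illustration; the learned module in [30] is more involved) applied to every frame of a training pair, so that the relative brightness ordering, and with it the photometric loss, stays consistent across frames.

```python
# Illustrative sketch only: share one monotonic intensity mapping (a fixed
# gamma curve, which is an assumption, not the module from [30]) across both
# frames of a training pair to brighten dark images without breaking the
# photometric consistency used for self-supervision.
import torch


def enhance_pair(frame_t: torch.Tensor,
                 frame_s: torch.Tensor,
                 gamma: float = 0.4) -> tuple[torch.Tensor, torch.Tensor]:
    """frame_t, frame_s: images in [0, 1], shape (B, 3, H, W)."""
    def mapping(x: torch.Tensor) -> torch.Tensor:
        # Monotonically increasing, so pixel intensity ordering is preserved.
        return x.clamp(0.0, 1.0) ** gamma

    # The same mapping is applied to target and source frames.
    return mapping(frame_t), mapping(frame_s)
```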
“…In addition, depth estimation also needs to consider some special scenarios, such as dark, foggy, rainy, and snowy weather. These are very challenging scenes, and some recent studies have focused on them, such as the work of Wang, K. et al. [63] on depth estimation in nighttime environments. With the widespread application of depth estimation, new research needs to consider such special scenarios to make depth estimation more general.…”
Section: Discussion
confidence: 99%
“…Watson et al. [26] drew on the cost-volume method from multi-view depth estimation and proposed a novel consistency loss to handle moving objects. Wang et al. [27] proposed a statistics-based masking strategy, which uses dynamic statistics to adjust the number of pixels removed in low-texture areas.…”
Section: Related Work
confidence: 99%
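As a rough sketch of the statistics-based masking idea in the last excerpt, the snippet below removes pixels whose local texture falls below a threshold derived from the current batch. The variance-based texture measure and the percentile `q` are illustrative assumptions, not necessarily the statistics used in [27].

```python
# Hedged sketch of a statistics-based mask for low-texture regions: pixels
# whose local intensity variance falls below a batch-dependent quantile are
# excluded from the photometric loss.
import torch
import torch.nn.functional as F


def low_texture_mask(gray: torch.Tensor, q: float = 0.2) -> torch.Tensor:
    """gray: (B, 1, H, W) grayscale image in [0, 1].
    Returns a boolean mask that is True for pixels kept in the loss."""
    # Local mean and variance over a 7x7 window as a crude texture measure.
    mean = F.avg_pool2d(gray, 7, stride=1, padding=3)
    var = F.avg_pool2d(gray ** 2, 7, stride=1, padding=3) - mean ** 2
    # Dynamic threshold: the q-quantile of the variance within each image.
    thresh = torch.quantile(var.flatten(1), q, dim=1).view(-1, 1, 1, 1)
    return var > thresh
```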