2020
DOI: 10.1007/978-3-030-58604-1_27
Unsupervised Monocular Depth Estimation for Night-Time Images Using Adversarial Domain Feature Adaptation

Cited by 41 publications (40 citation statements)
References 39 publications
“…In the next step, IMUTube calculates the camera ego-motion to account for 3D human pose location and orientation in accordance with the camera movements. IMUTube does so by estimating the background depth maps from each scene and lifting them to 3D point cloud models [62,63]. Based on subsequent 3D point clouds, IMUTube calculates the camera's ego-motion using the Iterative Closest Points (ICP) method [64].…”
Section: 3D Human Motion Tracking and Virtual IMU Data Extraction
confidence: 99%
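The pipeline described in the statement above (per-scene depth maps lifted to 3D point clouds, then ICP between successive clouds to recover camera ego-motion) can be illustrated with a minimal point-to-point ICP in NumPy. This is a sketch, not IMUTube's actual implementation; `best_fit_transform` and `icp` are illustrative names, and a real system would use a KD-tree and outlier rejection rather than brute-force nearest neighbours.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (Kabsch) mapping point set A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=30):
    """Point-to-point ICP: returns (R, t) aligning src onto dst."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, nn)
        cur = cur @ R.T + t
        # compose incremental transform into the accumulated one
        R_total = R @ R_total
        t_total = R @ t_total + t
    return R_total, t_total
```

Between two consecutive background clouds, the recovered `(R, t)` is the relative camera motion, which can then be used to express the 3D human pose in a fixed world frame.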
“…In the next step, IMUTube calculates the camera ego-motion to account for 3D human pose location and orientation in accordance with the camera movements. IMUTube does so by estimating the background depth maps from each scene and lifting them to 3D point cloud models [62,63]. Based on subsequent 3D point clouds, IMUTube calculates the camera's ego-motion using the Iterative Closest Points (ICP) method [64].…”
Section: D Human Motion Tracking and Virtual Imu Data Extractionmentioning
confidence: 99%
“…In the past two years, more and more scholars have used adversarial discriminative learning methods to predict depth information with better results. [41] trains an encoder, via an adversarial discriminative learning method based on PatchGAN, to extract night-time features that are indistinguishable from daytime features, and plugs a pre-trained daytime depth decoder into its back end to achieve unsupervised night-time monocular depth estimation. S3Net [42] considers the geometric structure across space and time in monocular video frames within an adversarial network framework, i.e., it uses geometric, temporal, and semantic constraints simultaneously for depth prediction.…”
Section: B Self-supervised Monocular Depth Estimation
confidence: 99%
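The adversarial scheme attributed to [41] above can be illustrated by its loss structure alone: a PatchGAN-style discriminator emits a grid of logits classifying feature patches as day (1) or night (0), while the night-time encoder is trained against flipped labels so its features become indistinguishable from daytime ones. A minimal NumPy sketch under that assumption (function names are illustrative, not the paper's API):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(day_logits, night_logits):
    """PatchGAN-style BCE: each spatial logit classifies day (1) vs night (0)."""
    d = sigmoid(day_logits)
    n = sigmoid(night_logits)
    return -(np.log(d + 1e-8).mean() + np.log(1.0 - n + 1e-8).mean())

def encoder_adversarial_loss(night_logits):
    """Night encoder objective: make night features look like day (label 1)."""
    n = sigmoid(night_logits)
    return -np.log(n + 1e-8).mean()
```

In training, the two losses are minimised alternately: the discriminator on `discriminator_loss`, the night encoder on `encoder_adversarial_loss`, with the daytime encoder and depth decoder kept frozen.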
“…To estimate depth at all times of day, [19] utilizes a thermal imaging camera sensor to reduce the influence of low visibility at night-time, while [26] adds LiDAR to provide additional information when estimating depth maps at night-time. Meanwhile, using generative adversarial networks, [33] and [34] propose effective strategies for depth estimation of night-time images. [33] utilizes a translation network that handles light effects and uninformative regions, and can render realistic night stereo images from day stereo images, and vice versa.…”
Section: Night-time Depth Estimation
confidence: 99%