2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00347
Dense Depth Posterior (DDP) From Single Image and Sparse Range

Abstract: We present a deep learning system to infer the posterior distribution of a dense depth map associated with an image, by exploiting sparse range measurements, for instance from a lidar. While the lidar may provide a depth value for a small percentage of the pixels, we exploit regularities reflected in the training set to complete the map so as to have a probability over depth for each pixel in the image. We exploit a Conditional Prior Network, that allows associating a probability to each depth value given an i…

Cited by 123 publications (143 citation statements)
References 44 publications
“…Compared to [14] and [28], our VGG11 model has a 76.1% and 61.5% reduction in the encoder parameters and a 65.1% and 48.4% reduction overall, respectively. Our VGG8 model has an 89.9% and 83.9% reduction in the encoder parameters and an 80% and 66% reduction overall compared to those of [14] and [28], respectively. Despite having fewer parameters, our method outperforms those of [14, 28].…”
Section: Network Architecture
confidence: 93%
“…This is in contrast to other unsupervised methods [14] (which follows early fusion and concatenates features from the two branches after the first convolution) and [28] (late fusion), both of which use ResNet34 encoders with ≈ 23.8M and ≈ 14.8M parameters, respectively. Both [14, 28] employ the same decoder with ≈ 4M parameters, totaling ≈ 27.8M and ≈ 18.8M parameters, respectively. Compared to [14] and [28], our VGG11 model has a 76.1% and 61.5% reduction in the encoder parameters and a 65.1% and 48.4% reduction overall, respectively.…”
Section: Network Architecture
confidence: 98%
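The two reduction figures quoted against [14] and [28] can be cross-checked arithmetically: applied to the stated ResNet34 encoder and total parameter counts, each pair of percentages should imply the same VGG11 model size. A minimal sketch of that check (the resulting VGG11 parameter counts are inferred here, not stated in the excerpt):

```python
# Parameter counts quoted in the citation excerpt (in millions of parameters).
enc_14, enc_28 = 23.8e6, 14.8e6        # ResNet34 encoders of [14] and [28]
total_14, total_28 = 27.8e6, 18.8e6    # encoder + ~4M shared decoder

# Each quoted reduction, applied to the corresponding baseline, should give
# (approximately) the same implied size for the cited VGG11 model.
vgg11_enc_a = enc_14 * (1 - 0.761)     # "76.1% encoder reduction vs [14]"
vgg11_enc_b = enc_28 * (1 - 0.615)     # "61.5% encoder reduction vs [28]"
vgg11_tot_a = total_14 * (1 - 0.651)   # "65.1% overall reduction vs [14]"
vgg11_tot_b = total_28 * (1 - 0.484)   # "48.4% overall reduction vs [28]"

print(round(vgg11_enc_a / 1e6, 2), round(vgg11_enc_b / 1e6, 2))  # both ≈ 5.7M
print(round(vgg11_tot_a / 1e6, 2), round(vgg11_tot_b / 1e6, 2))  # both ≈ 9.7M
```

The two encoder estimates agree to within about 0.01M, and the two overall estimates to within less than 0.01M, so the quoted percentages are mutually consistent and imply a VGG11 model of roughly 5.7M encoder and 9.7M total parameters.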