2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01253
Boosting Monocular Depth Estimation with Lightweight 3D Point Fusion

Cited by 16 publications (3 citation statements)
References 32 publications
“…Lam et al. [29] adopted a fully convolutional framework: taking an RGB image and a sparse 3D point cloud as input, they project the points onto the image as depth constraints to form a sparse depth map, creating an RGB-D image.…”
Section: Related Work
confidence: 99%
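The fusion step this statement describes can be made concrete. Below is a minimal sketch, not the authors' implementation: it assumes a pinhole camera with intrinsics K, and the function and variable names (make_rgbd, points_cam) are illustrative. It projects sparse 3D points into the image, rasterizes them into a sparse depth map, and stacks that map with the RGB image to form a 4-channel RGB-D input.

```python
import numpy as np

def make_rgbd(rgb: np.ndarray, points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """rgb: (H, W, 3); points_cam: (N, 3) points in camera coordinates;
    K: (3, 3) pinhole intrinsics. Returns an (H, W, 4) RGB-D array where
    pixels with no projected point carry depth 0 (i.e., 'unknown')."""
    H, W, _ = rgb.shape
    depth = np.zeros((H, W), dtype=np.float32)

    # Keep only points in front of the camera.
    pts = points_cam[points_cam[:, 2] > 0]

    # Perspective projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uv = (K @ pts.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = pts[:, 2]

    # Discard projections that fall outside the image bounds.
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z = u[ok], v[ok], z[ok]

    # If several points land on one pixel, keep the nearest: write far
    # points first so that nearer points overwrite them.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]

    return np.concatenate([rgb.astype(np.float32), depth[..., None]], axis=-1)
```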
“…In a more general 3D reconstruction pipeline, well-triangulated 2D features and their corresponding 3D keypoints are a by-product of most SfM algorithms [29]. The PatchMatch framework is flexible enough to support any kind of initial solution, without requiring the design of complex neural architectures to extract representations from sparse input data [30].…”
Section: B. Keypoint-based Initialization
confidence: 99%
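As a rough illustration of the initialization strategy this statement describes, here is a minimal sketch, not code from either cited paper: it seeds a PatchMatch-style per-pixel depth hypothesis map from sparse SfM-triangulated keypoints and fills the remaining pixels with random hypotheses drawn from the keypoints' depth range. The function name init_depth_hypotheses and its parameters are assumptions.

```python
import numpy as np

def init_depth_hypotheses(H, W, keypoints_uv, keypoint_depths, rng=None):
    """keypoints_uv: (N, 2) integer pixel coordinates of triangulated
    2D features; keypoint_depths: (N,) their triangulated depths.
    Returns an (H, W) initial depth map for PatchMatch propagation."""
    rng = np.random.default_rng() if rng is None else rng

    # Random hypotheses everywhere, bounded by the observed sparse depths.
    d_min, d_max = keypoint_depths.min(), keypoint_depths.max()
    depth = rng.uniform(d_min, d_max, size=(H, W)).astype(np.float32)

    # Overwrite pixels that carry well-triangulated keypoints: these act
    # as reliable seeds that PatchMatch propagation spreads to neighbors.
    u = np.clip(keypoints_uv[:, 0], 0, W - 1)
    v = np.clip(keypoints_uv[:, 1], 0, H - 1)
    depth[v, u] = keypoint_depths
    return depth
```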
“…Monocular Depth Estimation: Monocular depth estimation has recently shifted toward improving neural network architectures and optimization methods [59,82,85,35,10,48,65,87], integrating hierarchical features [85,57,65], leveraging camera motion between pairs of frames [97,51,41,72], and exploiting planar guidance [57,47] and 3D geometric constraints [62,21,61,46]. More recently, audio [81,38,69] has been introduced to help estimate depth.…”
Section: Related Work
confidence: 99%