2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00037
GeoNet: Geometric Neural Network for Joint Depth and Surface Normal Estimation

Cited by 363 publications (298 citation statements). References 20 publications.
“…The loss function is the mean angular difference between the ground-truth and the regressed normal.…”
Section: Normal Network (mentioning)
confidence: 99%
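To make the quoted loss concrete, here is a minimal NumPy sketch of a mean angular error between a regressed and a ground-truth normal map. The function name, the (H, W, 3) array layout, and reporting in degrees are illustrative assumptions, not the cited authors' code.

```python
import numpy as np

def mean_angular_error(pred, gt, eps=1e-8):
    """Mean angle (in degrees) between predicted and ground-truth normals.

    pred, gt: (H, W, 3) arrays of per-pixel surface normals. Both maps
    are re-normalized to unit length before the comparison.
    """
    pred = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + eps)
    gt = gt / (np.linalg.norm(gt, axis=-1, keepdims=True) + eps)
    # Clip the dot product into arccos's valid domain to avoid NaNs
    # from floating-point round-off.
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()
```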
“…The above shape refinement is iterated 5 times in our network to simulate the iterative solution of the original energy equation in [24]. Figure 3 compares our method with the 'Kernel Regression' layer [30] on a toy example, which is also designed to fuse the surface normal and depth. Figure 4 shows a comparison with the work [24] on real data, and our method also produces a more convincing result.…”
Section: Depth Refinement (mentioning)
confidence: 99%
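The excerpt describes fusing surface normals and depth through an iterated refinement step. As a hedged sketch of one way such fusion can work, each neighbor below proposes a depth for a pixel by extending its own tangent plane to that pixel's camera ray, and the proposals are averaged. The uniform weighting, window size, and fixed iteration count are simplifying assumptions, not the published layer of [30] or the energy solver of [24].

```python
import numpy as np

def refine_depth_with_normals(depth, normals, K, iters=5, win=2):
    """Normal-guided depth refinement, kernel-regression style.

    depth: (H, W) depth map; normals: (H, W, 3) unit normals;
    K: 3x3 pinhole intrinsics. Each neighbor j proposes a depth for
    pixel i by intersecting j's tangent plane with i's camera ray;
    the refined depth averages the valid proposals.
    """
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Unit-depth rays: a 3D point is ray * depth.
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones((H, W))], -1)

    for _ in range(iters):
        points = rays * depth[..., None]         # back-projected 3D points
        plane_d = np.sum(normals * points, -1)   # plane offset n_j . P_j
        proposals = np.zeros_like(depth)
        weights = np.zeros_like(depth)
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                # np.roll wraps around at image borders; a real
                # implementation would mask those pixels out.
                n_j = np.roll(normals, (dy, dx), axis=(0, 1))
                d_j = np.roll(plane_d, (dy, dx), axis=(0, 1))
                denom = np.sum(n_j * rays, -1)   # n_j . ray_i
                valid = np.abs(denom) > 1e-6
                proposals += np.where(valid, d_j / np.where(valid, denom, 1.0), 0.0)
                weights += valid.astype(depth.dtype)
        depth = proposals / np.maximum(weights, 1.0)
    return depth
```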
“…For example, a continuous conditional random field (CRF) [28] is used for depth prediction, which takes pair-wise information into account. Other high-order geometric relations [9,31] are also exploited, such as designing a gravity constraint for local regions [9] or incorporating the depth-to-surface-normal mutual transformation inside the optimization pipeline [31]. Note that, for the above methods, almost all the geometric constraints are 'local' in the sense that they are extracted from a small neighborhood in either 2D or 3D.…”
Section: Introduction (mentioning)
confidence: 99%
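The "depth-to-surface-normal mutual transformation" mentioned here can be illustrated in its depth-to-normal direction: back-project the depth map into a point cloud and take the cross product of its local tangent vectors. A least-squares plane fit over each pixel's neighborhood would be closer to what [31] describes; the gradient-based version below is a simplified sketch, and the sign convention is an assumption.

```python
import numpy as np

def normals_from_depth(depth, K):
    """Estimate per-pixel surface normals from a depth map.

    depth: (H, W) depth map; K: 3x3 pinhole intrinsics. Normals are the
    cross product of the point cloud's tangents along image x and y,
    normalized to unit length. Flip the sign if your convention needs
    normals to face the camera.
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project pixels to 3D: P = depth * K^{-1} [u, v, 1].
    pts = np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], -1)
    du = np.gradient(pts, axis=1)  # tangent along image x
    dv = np.gradient(pts, axis=0)  # tangent along image y
    n = np.cross(du, dv)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
```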
“…Each scene contains aligned RGB and depth images, acquired from a Microsoft Kinect sensor. Following previous works on single-image depth estimation [Chen et al. 2016; Qi et al. 2018; Zoran et al. 2015], we use the standard training-testing split and evaluate our method on the 654 image-depth pairs from the testing set.…”
Section: Depth Prediction Quality (mentioning)
confidence: 99%
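For context on how methods are scored on those 654 NYU v2 test pairs, the sketch below computes the metrics most commonly reported in this literature: mean relative error, RMSE, and threshold accuracy delta < 1.25^k. The valid-pixel mask and protocol details (border crops, depth caps) vary between the cited works and are assumptions here.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common single-image depth estimation metrics.

    pred, gt: (H, W) depth maps in meters. Pixels with gt <= 0 are
    treated as invalid and excluded; real protocols may additionally
    crop borders or cap the depth range.
    """
    mask = gt > 0
    p, g = pred[mask], gt[mask]
    rel = np.mean(np.abs(p - g) / g)        # mean absolute relative error
    rmse = np.sqrt(np.mean((p - g) ** 2))   # root mean squared error
    ratio = np.maximum(p / g, g / p)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return {"rel": rel, "rmse": rmse, "delta": deltas}
```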