2013 IEEE International Conference on Computer Vision 2013
DOI: 10.1109/iccv.2013.235

3DNN: Viewpoint Invariant 3D Geometry Matching for Scene Understanding

Abstract: We present a new algorithm, 3DNN (3D Nearest-Neighbor)…

Cited by 25 publications (23 citation statements)
References 35 publications
“…Our model can be understood as a novel way to extend this framework to 3D. There are also works on using CAD models for training [31][32][33][34][35][36], but they are not for depth images.…”
Section: Related Work and Discussion
Confidence: 99%
“…For every placement of a 3D object (which we call swapping) we calculate its geometric likelihood, that is, the similarity between the image and the projected 3D model. This geometric likelihood is evaluated using the method of [22], where various geometrically meaningful image features, such as surface normal and clutter estimates, are combined in a learned linear model to output a single number as a geometric similarity score. In other words, the geometric similarity score is w^T L, where each row of L is the output of a geometrically meaningful image feature, and w is a learned weight vector.…”
Section: Approach
Confidence: 99%
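
The linear scoring step quoted above can be illustrated with a minimal Python sketch. The feature names, values, and weights below are placeholders for illustration only; the actual geometric cues and learned weights of Satkin et al. [22] are not reproduced here.

import numpy as np

# Sketch of a learned linear geometric-similarity score, score = w^T L.
# Feature names and numbers are hypothetical placeholders, not the cues
# or weights used in [22].

def geometric_similarity(feature_scores: np.ndarray, weights: np.ndarray) -> float:
    """Combine per-feature geometric agreement scores into one scalar.

    feature_scores: vector L, one entry per geometric cue
                    (e.g. surface-normal agreement, clutter consistency).
    weights:        learned weight vector w of the same length.
    """
    return float(weights @ feature_scores)

# Hypothetical example: three cues scored for one candidate 3D placement.
L = np.array([0.8, 0.6, 0.4])   # placeholder cue scores
w = np.array([0.5, 0.3, 0.2])   # placeholder learned weights
print(geometric_similarity(L, w))  # higher score = better geometric fit
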
“…Secondly, we build on one of the most recent state-of-the-art works in the area of scene understanding, that of Satkin et al. [22], by creating a flexible yet robust framework for the incorporation of both semantic and appearance cues for the purpose of achieving better holistic 3D scene understanding.…”
Section: Introduction
Confidence: 99%
“…Despite the computational complexity involved in such tasks, new computer vision developments can potentially help to address these challenges at manageable cost (Google, 2016). For example, computer vision systems could help improve the quality of images taken in the presence of reflecting or occluding elements, such as windows and fences (Xue et al., 2015), compensate for the viewpoint from which the image was captured in three-dimensional visual data processing (Satkin and Hebert, 2013), localize the user in the surrounding environment so that indoor and outdoor scenes can be represented in referenced spaces (Bettadapura et al., 2015), or automatically capture the relationships among visual elements in real-world scenarios (Choi et al., 2015).…”
Section: Introduction
Confidence: 99%