2016 Fourth International Conference on 3D Vision (3DV)
DOI: 10.1109/3dv.2016.18
SceneNN: A Scene Meshes Dataset with aNNotations

Cited by 273 publications (156 citation statements)
References 38 publications

“…To compare against previous work in [5], we evaluate the 3D segmentation accuracy of the proposed dense object-level semantic mapping framework on real-world indoor scans from the SceneNN [8] dataset, improving over the baseline for most of the evaluated scenes. A sample inventory of object models discovered in these scenes is shown to contain recognized, semantically annotated elements, as well as newly discovered, previously unseen objects.…”
Section: Methods (mentioning)
confidence: 99%
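
The excerpt above reports per-scene 3D segmentation accuracy on SceneNN scans. As a minimal sketch of how such a per-point accuracy can be computed (assuming per-point label arrays; this is an illustration, not the cited framework's actual evaluation code):

```python
import numpy as np

def segmentation_accuracy(pred_labels: np.ndarray, gt_labels: np.ndarray,
                          ignore_label: int = -1) -> float:
    """Fraction of annotated points whose predicted semantic label matches
    the ground truth. Points carrying `ignore_label` (unannotated) are skipped.

    Generic per-point accuracy sketch, not the cited paper's metric code.
    """
    valid = gt_labels != ignore_label          # mask out unannotated points
    if not np.any(valid):
        return float("nan")
    correct = pred_labels[valid] == gt_labels[valid]
    return float(correct.mean())

# Hypothetical usage on one scan: per-point label arrays of equal length.
pred = np.array([3, 3, 5, 7, 7, 0])
gt   = np.array([3, 3, 5, 5, 7, -1])           # -1 marks an unannotated point
print(f"accuracy = {segmentation_accuracy(pred, gt):.2f}")   # accuracy = 0.80
```
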
“…Note that different colors in Figure (a) and Figure (b) represent different instances, and that the same instance in the prediction and the ground truth is not necessarily shown in the same color. Progressive mapping of sequence 231 from the SceneNN [8] dataset is shown in the accompanying video available at http://youtu.be/Jvl42VJmYxg.…”
Section: Introduction (mentioning)
confidence: 99%
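
The colors need not agree because instance identifiers are arbitrary: evaluation matches predicted instances to ground-truth instances by spatial overlap rather than by ID. A hypothetical greedy IoU-matching sketch over per-point instance labels (function name, array names, and threshold are assumptions, not the cited paper's evaluation code):

```python
import numpy as np

def match_instances(pred_ids: np.ndarray, gt_ids: np.ndarray, iou_thresh=0.5):
    """Greedily match predicted instance IDs to ground-truth instance IDs by
    point-wise IoU. Instance colors in figures are arbitrary precisely
    because this matching ignores the raw ID values."""
    matches, used_gt = {}, set()
    for p in np.unique(pred_ids):
        p_mask = pred_ids == p
        best_iou, best_g = 0.0, None
        for g in np.unique(gt_ids):
            if g in used_gt:
                continue
            g_mask = gt_ids == g
            iou = np.logical_and(p_mask, g_mask).sum() / np.logical_or(p_mask, g_mask).sum()
            if iou > best_iou:
                best_iou, best_g = iou, g
        if best_g is not None and best_iou >= iou_thresh:
            matches[int(p)] = int(best_g)
            used_gt.add(best_g)
    return matches
```
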
“…We apply strong supervision to the descriptor network on datasets with ground truth poses, i.e., SceneNN [13] and the "3D reconstruction dataset" [38]. The loss function for strong supervision, defined on a pair of overlapping point clouds X and X′ with ground truth poses G ∈ SE(3) and G′ ∈ SE(3), is given by:…”
Section: D2 Strong Supervision (mentioning)
confidence: 99%
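
The quoted loss itself is truncated in this excerpt, so it is not reproduced here. Purely as an illustration of descriptor training under ground-truth pose supervision, one common pattern is to align the two overlapping clouds with G and G′, form correspondences by nearest neighbour in the aligned frame, and pull the descriptors of corresponding points together. Everything below (function names, the plain L2 pull term, the distance threshold) is an assumption, not the cited formulation:

```python
import numpy as np

def align(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Apply a 4x4 SE(3) ground-truth pose to an (N, 3) point cloud."""
    return points @ pose[:3, :3].T + pose[:3, 3]

def strong_supervision_loss(X, Xp, G, Gp, desc_X, desc_Xp, max_dist=0.05):
    """Illustrative descriptor loss under ground-truth pose supervision.

    X, Xp          : (N, 3) and (M, 3) overlapping point clouds
    G, Gp          : 4x4 ground-truth poses mapping each cloud to a world frame
    desc_X, desc_Xp: per-point descriptors, shapes (N, D) and (M, D)

    NOTE: generic sketch (L2 pull on nearest-neighbour pairs within
    `max_dist`), not the loss from the cited paper.
    """
    Xw, Xpw = align(X, G), align(Xp, Gp)              # common world frame
    # Brute-force nearest neighbour from X to X' in the aligned frame.
    d2 = ((Xw[:, None, :] - Xpw[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)
    close = np.sqrt(d2[np.arange(len(Xw)), nn]) < max_dist
    if not np.any(close):
        return 0.0
    # Pull descriptors of geometrically corresponding points together.
    diff = desc_X[close] - desc_Xp[nn[close]]
    return float((diff ** 2).sum(axis=1).mean())
```
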
“…To evaluate our system, we perform experiments on sequence 66 of the SceneNN dataset [34], where multiple identical objects are present (shown in Fig. 5), and on indoor datasets collected with a Tango phone and released with this paper², see Fig.…”
Section: Methods (mentioning)
confidence: 99%