The Second Workshop on 3D Reconstruction Meets Semantics: Challenge Results Discussion (2019)
DOI: 10.1007/978-3-030-11015-4_48

Cited by 11 publications (14 citation statements); references 14 publications.
“…Aiming at showing that our approach generalizes well to various types of environments, we run a series of experiments on three completely different datasets for 3D semantic segmentation containing image sequences: ScanNet [28], an RGB-D video dataset established as one of the standard 3D segmentation benchmarks; the 3DRMS Challenge Dataset [29], a challenging outdoor dataset featuring image sequences captured from virtual and real drives through semantically rich garden scenes; and the Aerial Depth Dataset [1], a photorealistic aerial dataset that exhibits more challenging scenarios than the established indoor and car-driving datasets. In all of them, ground-truth 2D semantic annotations and camera poses, as well as semantically annotated 3D point clouds depicting the area of each sequence, are provided for evaluation.…”
Section: Methods (mentioning confidence: 99%)
“…To evaluate the performance of our approach on other challenging, but completely different environments, we also run experiments on the real-world validation sequence of the 2018 3DRMS Challenge [29] as well as on one sequence extracted from the Aerial Depth Dataset [1]. Here, given a set of RGB images with a known camera pose, the goal is to label the provided 3D point cloud of the scene.…”
Section: B. Evaluation on Outdoor Datasets (mentioning confidence: 99%)
“…There is no reference value or method for the KITTI scene reconstruction task regarding 3D point clouds. Therefore, we propose to measure performance using the completeness and accuracy criteria from [22], and also propose the relative accuracy. All three measures are defined as follows:…”
Section: Evaluation Metrics (mentioning confidence: 99%)
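The completeness and accuracy criteria mentioned in the excerpt above are commonly defined via nearest-neighbour distances between the reconstructed and ground-truth point clouds. The following is a minimal sketch of that convention, not the cited paper's exact implementation: the function name, the distance threshold, and the brute-force nearest-neighbour search are assumptions, and the "relative accuracy" measure is omitted because its definition is not given in the excerpt.

```python
import numpy as np

def chamfer_metrics(reconstruction, ground_truth, threshold=0.1):
    """Accuracy/completeness for point-cloud evaluation (illustrative sketch).

    accuracy: fraction of reconstructed points within `threshold` of the
        ground-truth cloud.
    completeness: fraction of ground-truth points within `threshold` of the
        reconstruction.
    Both clouds are (N, 3) arrays; `threshold` is a hypothetical value.
    """
    def nn_dist(src, dst):
        # Brute-force distance from each src point to its nearest dst point.
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
        return d.min(axis=1)

    accuracy = float((nn_dist(reconstruction, ground_truth) < threshold).mean())
    completeness = float((nn_dist(ground_truth, reconstruction) < threshold).mean())
    return accuracy, completeness
```

For large clouds the O(N*M) brute-force search would normally be replaced with a k-d tree (e.g. `scipy.spatial.cKDTree`); the brute-force form is used here only to keep the sketch self-contained.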
“…This paper is a technical report about our submission for the ECCV 2018 3DRMS Workshop Challenge on Semantic 3D Reconstruction [1]. In this paper, we address 3D semantic reconstruction for autonomous navigation using co-learning of depth map and semantic segmentation.…”
(mentioning confidence: 99%)