2018 21st International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc.2018.8569765

The TUBS Road User Dataset: A New LiDAR Dataset and its Application to CNN-based Road User Classification for Automated Vehicles

Cited by 10 publications (5 citation statements, 2019-2024)
References 15 publications
“…The international standards for 3D modeling are set by the OGC CityGML model [14], which defines the principles for mapping physical topographic objects and physical volumetric objects [14,18]. Physical volumetric models are combined with semantic and functional data to develop detailed models of the urban systems [18,22], road networks and other systems, including in real time [23,24].…”
Section: Literature Review
confidence: 99%
“…(Dewan et al., 2016) use a Bayesian approach and motion cues to automatically detect and track objects. (Plachetka et al., 2018) track manually annotated 3D bounding boxes and corresponding polygons (for RGB segmentation). (Chang et al., 2019) accumulate manually selected points over time to automatically infer fixed-size 3D bounding boxes.…”
Section: Related Work
confidence: 99%
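As described in the excerpt, Chang et al. (2019) accumulate manually selected points over time and infer fixed-size 3D bounding boxes from them. A minimal Python sketch of that general idea follows; the class dimensions, function names, and PCA-based heading estimate are illustrative assumptions, not the cited authors' implementation.

    # Sketch: fixed-size 3D box from points accumulated over several scans.
    # FIXED_SIZES and the PCA heading are assumptions for illustration only.
    import numpy as np

    FIXED_SIZES = {"car": (4.5, 1.8, 1.6), "pedestrian": (0.6, 0.6, 1.8)}  # (l, w, h) in m, assumed

    def fit_fixed_size_box(points: np.ndarray, cls: str):
        """points: (N, 3) accumulated object points; returns a 4DoF box."""
        center = points.mean(axis=0)
        # Estimate heading from the dominant direction of the xy footprint (PCA).
        xy = points[:, :2] - center[:2]
        eigvals, eigvecs = np.linalg.eigh(xy.T @ xy)
        major = eigvecs[:, np.argmax(eigvals)]
        yaw = float(np.arctan2(major[1], major[0]))
        return {"center": center, "size": FIXED_SIZES[cls], "yaw": yaw}

    # Example: points from several scans stacked into one cloud.
    cloud = np.random.randn(300, 3) * [1.5, 0.5, 0.4] + [10.0, 2.0, 0.8]
    box = fit_fixed_size_box(cloud, "car")

Note that a PCA heading is ambiguous by 180°; resolving the sign (e.g., from the track's motion direction) is left out of this sketch.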
“…Algorithmically guided generation of pixel-level instance segmentation masks in both 2D and 3D, with 2D bounding boxes as a by-product, speeding up annotation time by a factor of up to 8.55.

Reference                    | Approach                   | Labels    | Modalities
(Marion et al., 2018)        | mesh + reconstruction      | 6DoF      | RGB, D
(Suchi et al., 2019)         | incremental scene building | 2D        | D
(Wong et al., 2015)          | learning                   | 4DoF      | RGB
(Monica et al., 2017)        | graph segmentation         | -         | D
(Zimmer et al., 2019)        | interpolation              | 4DoF      | -
(Wang et al., 2019)          | tracking + clustering      | 4DoF      | -
(Arief et al., 2020)         | tracking + clustering      | 4DoF      | -
(Plachetka et al., 2018)     | tracking + optimization    | 4DoF, 2D  | RGB, D
(Lee et al., 2018)           | learning                   | 4DoF      | D
(Huang et al., 2020)         | 3D-2D projection           | -         | RGB, D
(Yan et al., 2020)           | clustering                 | 3DoF      | -
(Hodaň et al., 2017)         | mesh + reconstruction      | 6DoF, 2D  | -
(Xiang et al., 2018)         | mesh + reference frame     | 6DoF      | D
(Grenzdörffer et al., 2020)  | mesh + reference frame     | 6DoF, 2D  | D
(Xie et al., 2016)           | label transfer             | -         | RGB
(Patil et al., 2019)         | interpolation              | 4DoF      | -
(Dewan et al., 2016)         | Bayesian approach          | 3DoF      | D
(Chang et al., 2019)         | accumulation over time     | 6DoF      | -
…”
Section: Introduction
confidence: 99%
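The "interpolation" entries in the table above (e.g., Zimmer et al., 2019; Patil et al., 2019) label a few keyframes by hand and fill in the 4DoF boxes between them. A minimal sketch of such keyframe interpolation, assuming an (x, y, z, yaw) box parametrization and linear motion between keyframes; both assumptions are mine, not the cited tools' exact schemes.

    # Sketch: linear 4DoF keyframe interpolation for box annotation.
    import numpy as np

    def interpolate_box(kf0, kf1, t0, t1, t):
        """Interpolate a 4DoF box (x, y, z, yaw) between two keyframes."""
        a = (t - t0) / (t1 - t0)
        xyz = (1 - a) * np.asarray(kf0[:3]) + a * np.asarray(kf1[:3])
        # Shortest-arc yaw difference avoids the +/-pi wrap-around.
        dyaw = np.arctan2(np.sin(kf1[3] - kf0[3]), np.cos(kf1[3] - kf0[3]))
        return (*xyz, kf0[3] + a * dyaw)

    # Annotator labels frames 0 and 10; frame 5 is filled in automatically.
    box_frame5 = interpolate_box((2.0, 1.0, 0.5, 0.1), (6.0, 1.5, 0.5, 0.4), 0, 10, 5)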
“…• Usage of an adaptive ground surface estimation, which serves as the basis for a surface-relative environment representation,
• a multi-stage pointcloud preprocessing pipeline, including pointcloud compression and clustering,
• a motion classification of detected clusters, and
• a hybrid representation for movable and stationary environment elements, consisting of a multi-layer semantic grid map and an IMM-EKF-based object tracking with explicit modeling of partial occlusions.
We present the latest additions to our previous work (e. g. [3]-[10]) and provide evaluation results of the perception performance using a subset of the TUBS Road User Dataset [11]. We extend common evaluation strategies by …”
[Figure 1: movable elements, stationary elements, traversable regions]
Section: A Contribution Of This Paper
confidence: 99%
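The ground estimation and clustering stages the excerpt lists can be illustrated roughly as follows. This sketch substitutes a per-cell lowest-point ground estimate and DBSCAN for the authors' adaptive ground surface estimation and clustering, so the cell size, clearance, and eps values are illustrative parameters only.

    # Sketch: surface-relative preprocessing; keep points above a per-cell
    # ground estimate, then cluster the remainder into obstacle candidates.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def remove_ground(points: np.ndarray, cell: float = 1.0, clearance: float = 0.3):
        """Keep points more than `clearance` above the lowest point of their xy cell."""
        keys = np.floor(points[:, :2] / cell).astype(np.int64)
        ground = {}
        for k, z in zip(map(tuple, keys), points[:, 2]):
            ground[k] = min(ground.get(k, np.inf), z)
        above = np.array([z - ground[tuple(k)] > clearance
                          for k, z in zip(keys, points[:, 2])])
        return points[above]

    def cluster_obstacles(points: np.ndarray, eps: float = 0.7, min_pts: int = 5):
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points)
        return [points[labels == i] for i in range(labels.max() + 1)]

    # Example: a flat synthetic ground plane plus one dense object cluster.
    ground_pts = np.c_[np.random.rand(500, 2) * 20, np.random.rand(500) * 0.05]
    object_pts = np.random.randn(200, 3) * 0.3 + [5.0, 5.0, 1.0]
    obstacles = cluster_obstacles(remove_ground(np.vstack([ground_pts, object_pts])))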
“…Despite the fact that using such a publicly available dataset would have allowed the comparison with other approaches, both expose some drawbacks regarding their usability for evaluation of the presented perception system (i. e., unlabeled regions or ranges [36], no per-scan annotated data [75]). Thus, we use a preliminary version of the TUBS Road User Dataset [11] for evaluating our approach. This dataset contains labeled objects within the entire field of view of the sensor and for each LiDAR scan, recorded along the project route of the project Stadtpilot.…”
Section: Definition Of Datasets and Evaluation Criteria
confidence: 99%