2019
DOI: 10.48550/arxiv.1903.01568
Preprint

The H3D Dataset for Full-Surround 3D Multi-Object Detection and Tracking in Crowded Urban Scenes

Cited by 8 publications (3 citation statements)
References 22 publications
“…With the fast development of deep-network-based point cloud processing methods [26], [27], [33] and sparse convolution [21], [13], [14], [16], [15], various algorithms [37], [31], [35], [19] have been developed to efficiently detect objects in point clouds. These advantages of LiDAR sensors and the associated algorithms make them indispensable components of modern autonomous driving systems, motivating a range of algorithms [1], [6], [19], [20], [32], [35], [37] and multi-modality datasets [3], [7], [8], [12], [25]. KITTI [12] provides over 7K annotated samples collected with cameras and LiDAR.…”
Section: Related Work
confidence: 99%
“…These datasets are typically created from a stationary surveillance camera [27,37,34] or from aerial views obtained from a static drone-mounted camera [41]. In driving scenes, the 3D point cloud-based datasets [15,36,23,5,1,9] were originally introduced for detection, tracking, etc., but have recently been used for vehicle trajectory prediction as well. Also, [58,8] provide RGB images captured from the egocentric view of a moving vehicle and apply them to the future trajectory forecasting problem.…”
Section: Datasets
confidence: 99%
“…In this process, NEMO generates multi-modal future motions of the target over the uncertainty of future ego-motion, which is reflective of real-world egocentric interactions. For more accurate ego-motion prediction, we release new IMU data for HEV-I [5], which extends its use far beyond the future object localization problem to visual odometry estimation [14] and other 2D image-based control learning tasks [15], [16]. The updated IMU sensor data will be made available at https://usa.honda-ri.com/hevi…”
Section: Introduction
confidence: 99%