2018 21st International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc.2018.8569433
Object Detection and Classification in Occupancy Grid Maps Using Deep Convolutional Networks

Abstract: Detailed environment perception is a crucial component of automated vehicles. However, to deal with the amount of perceived information, we also require segmentation strategies. Based on a grid map environment representation, well-suited for sensor fusion, free-space estimation and machine learning, we detect and classify objects using deep convolutional neural networks. As input for our networks we use a multi-layer grid map efficiently encoding 3D range sensor information. The inference output consists of a …

Cited by 69 publications (48 citation statements). References 19 publications.
“…Furthermore, we validate uncertainty estimation during the RPN and the output stage for different models and analyze their differences. We evaluate our work on the KITTI Bird's Eye View Evaluation benchmark allowing comparison to previously published work [8]. Finally, we reduce the number of box parameters and represent pose and shape uncertainty by a common shape based on collision probabilities.…”
Section: B Uncertainties In Object Detectionmentioning
confidence: 99%
“…The success of these applications is mainly due to the fact that a large number of datasets have become available in recent years. In the context of intelligent vehicles, interesting work has been developed in several different fields: trajectory prediction [5] [6], mapping [7], control [10] and even end-to-end approaches [8] [9], where the car is controlled completely by a Deep Learning module.…”
Section: Related Workmentioning
confidence: 99%
“…The bird's eye (zenithal) view I_BE is obtained over an area of 60 × 50 meters in front of the LiDAR sensor after carefully observing that roughly 95% of the annotated vehicles in KITTI are within these margins. Inspired by [5], [16], we generate a 2D grid with a resolution of 0.1 meters and project the cropped point cloud on it. We consequently obtain a bird's eye view I_BE ∈ R^(H×W×C), where H = 600, W = 500, and C = 6, accounting for six different features: 1) a binary occupancy term with zero value if no points are projected in the cell and one otherwise; 2) an absolute occupancy term, counting the total number of points in the cell; 3) the mean reflectivity value of the points on the cell; and 4, 5, and 6) the mean, minimum and maximum height values of the points projected on the cell.…”
Section: A Movable Objects Segmentationmentioning
confidence: 99%
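The six-feature bird's eye view encoding described in the statement above can be sketched in NumPy. This is a hypothetical illustration, not code from the cited paper: the function name `birds_eye_view`, the input layout (an (N, 4) array of x, y, z, reflectivity), and the crop ranges are assumptions; only the grid size (600 × 500 at 0.1 m resolution over 60 × 50 m) and the six per-cell features follow the quoted description.

```python
import numpy as np

def birds_eye_view(points, x_range=(0.0, 60.0), y_range=(-25.0, 25.0), res=0.1):
    """Project a LiDAR point cloud onto a 6-channel bird's eye view grid.

    points: (N, 4) array of (x, y, z, reflectivity). Channels of the
    returned (H, W, 6) tensor: 0) binary occupancy, 1) point count,
    2) mean reflectivity, 3) mean height, 4) min height, 5) max height.
    """
    H = int(round((x_range[1] - x_range[0]) / res))  # 600
    W = int(round((y_range[1] - y_range[0]) / res))  # 500

    # Crop to the area of interest in front of the sensor.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]
    rows = ((pts[:, 0] - x_range[0]) / res).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / res).astype(int)

    I = np.zeros((H, W, 6), dtype=np.float32)
    # Accumulate per-cell counts and sums (ufunc.at handles repeated cells).
    np.add.at(I[:, :, 1], (rows, cols), 1.0)        # point count
    np.add.at(I[:, :, 2], (rows, cols), pts[:, 3])  # reflectivity sum
    np.add.at(I[:, :, 3], (rows, cols), pts[:, 2])  # height sum
    I[:, :, 4] = np.inf                              # min height init
    I[:, :, 5] = -np.inf                             # max height init
    np.minimum.at(I[:, :, 4], (rows, cols), pts[:, 2])
    np.maximum.at(I[:, :, 5], (rows, cols), pts[:, 2])

    occ = I[:, :, 1] > 0
    I[:, :, 0] = occ.astype(np.float32)  # binary occupancy
    I[occ, 2] /= I[occ, 1]               # sums -> means
    I[occ, 3] /= I[occ, 1]
    I[~occ, 4] = 0.0                     # clear init values in empty cells
    I[~occ, 5] = 0.0
    return I
```

Note that `np.add.at` (and its `minimum`/`maximum` counterparts) is used instead of plain fancy-indexed assignment, since many points typically fall into the same 0.1 m cell and the updates must accumulate rather than overwrite.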
“…Similarly, [15] creates a front view projection of the polar LiDAR coordinates along with the reflectivity of each point, to segment vehicles by predicting the vehicleness confidence of each point. On the other hand, BirdNet [5], TopNet [16] or RT3D [17] make use of a bird's eye view projection of the point cloud, encoding different features on each cell.…”
Section: Introductionmentioning
confidence: 99%