2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8917089

Automatic Generation of Training Data for Image Classification of Road Scenes

Cited by 2 publications (5 citation statements)
References 24 publications
“…In [25], the average and the variance of the normalized height difference are used as features to determine which LiDAR3D points belong to the surface and those considered as curb candidates. This feature has been applied to LiDAR3D data in [17,19,22,24,33,41,50,51,55,59,62,65].…”
Section: Height Step
confidence: 99%
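The height-step feature quoted above can be sketched roughly as follows: for each LiDAR point, the mean and variance of the height differences to its horizontal neighbours (normalised by distance) are computed and thresholded to separate surface points from curb candidates. The function names, neighbourhood radius and thresholds below are illustrative assumptions, not values taken from [25] or the other cited works.

```python
import numpy as np

def height_step_features(points, radius=0.3):
    """Per-point mean and variance of the normalised height difference to
    horizontal neighbours within `radius` (radius is an assumed value)."""
    means = np.zeros(len(points))
    variances = np.zeros(len(points))
    for i, p in enumerate(points):
        d = np.linalg.norm(points[:, :2] - p[:2], axis=1)   # horizontal distance
        mask = (d > 1e-9) & (d < radius)
        if not mask.any():
            continue
        # Height difference normalised by horizontal distance to the neighbour.
        dz = np.abs(points[mask, 2] - p[2]) / d[mask]
        means[i] = dz.mean()
        variances[i] = dz.var()
    return means, variances

def curb_candidates(points, mean_thr=0.15, var_thr=0.05):
    """Points whose height-step statistics exceed the (assumed) thresholds
    are flagged as curb candidates; the rest are treated as surface."""
    means, variances = height_step_features(np.asarray(points, dtype=float))
    return (means > mean_thr) | (variances > var_thr)
```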
“…Similarly, normals have been computed on depth images in [30] while surface normals on the 3D information obtained from a disparity map has been discussed in [65]. In [59], the normal orientation is used to verify that curb candidate points separate two horizontal planes representing the sidewalk and the road.…”
Section: Normal Orientation
confidence: 99%
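The normal-orientation check described in the quote from [59] can be illustrated with a small sketch: estimate local normals by PCA and accept a curb candidate only if the patches just below and just above it are roughly horizontal and lie at different heights (road and sidewalk). The thresholds and helper names are hypothetical, not taken from the cited paper.

```python
import numpy as np

def estimate_normal(nbrs):
    """Unit normal of a local neighbourhood via PCA (eigenvector belonging
    to the smallest eigenvalue of the covariance matrix)."""
    _, eigvecs = np.linalg.eigh(np.cov(nbrs.T))
    n = eigvecs[:, 0]                      # smallest-eigenvalue direction
    return n / np.linalg.norm(n)

def separates_horizontal_planes(candidate, points, radius=0.3,
                                vertical_cos=0.9, min_step=0.05):
    """Check that a curb candidate sits between two roughly horizontal
    patches at different heights (assumed thresholds, illustrative only)."""
    d = np.linalg.norm(points[:, :2] - candidate[:2], axis=1)
    nbrs = points[d < radius]
    lower = nbrs[nbrs[:, 2] < candidate[2]]
    upper = nbrs[nbrs[:, 2] > candidate[2]]
    if len(lower) < 3 or len(upper) < 3:
        return False
    up = np.array([0.0, 0.0, 1.0])
    lower_flat = abs(np.dot(estimate_normal(lower), up)) > vertical_cos
    upper_flat = abs(np.dot(estimate_normal(upper), up)) > vertical_cos
    step = upper[:, 2].mean() - lower[:, 2].mean()
    return lower_flat and upper_flat and step > min_step
```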
“…Some researchers have also begun to use other sensor modalities to automatically label training data. Kuhner et al [11] propose the use of LiDAR in driverless cars to automatically create semantic labels for the image data generated by the car's cameras. Their approach quickly annotates images of roads and curbs for use in training neural networks used to detect and navigate around these objects.…”
Section: Related Work
confidence: 99%
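The LiDAR-based auto-labelling idea attributed to Kuhner et al. [11] amounts to projecting labelled LiDAR points into the camera image and writing their class ids into a sparse label mask. The sketch below assumes known camera intrinsics and LiDAR-to-camera extrinsics and omits occlusion handling, lens distortion and temporal accumulation; all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def autolabel_image(points, point_labels, K, T_cam_lidar, image_shape):
    """Project labelled LiDAR points into the camera image and write their
    class ids into a sparse label mask (255 = unlabelled pixel).

    points       : (N, 3) LiDAR points
    point_labels : (N,)   class id per point (e.g. road / curb / other)
    K            : (3, 3) camera intrinsics
    T_cam_lidar  : (4, 4) LiDAR-to-camera extrinsic transform
    image_shape  : (height, width)
    """
    h, w = image_shape
    # Homogeneous LiDAR points transformed into the camera frame.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]
    in_front = pts_cam[2] > 0.1            # keep points in front of the camera
    # Perspective projection to pixel coordinates.
    uv = (K @ pts_cam)[:, in_front]
    uv = (uv[:2] / uv[2]).round().astype(int)
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    mask = np.full((h, w), 255, dtype=np.uint8)
    mask[uv[1, valid], uv[0, valid]] = point_labels[in_front][valid]
    return mask
```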