2018 21st International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc.2018.8569519
Unlimited Road-scene Synthetic Annotation (URSA) Dataset

Abstract: In training deep neural networks for semantic segmentation, the main limiting factor is the low amount of ground truth annotation data that is available in currently existing datasets. The limited availability of such data is due to the time cost and human effort required to accurately and consistently label real images on a pixel level. Modern sandbox video game engines provide open world environments where traffic and pedestrians behave in a pseudo-realistic manner. This caters well to the collection of a be…

Cited by 19 publications (16 citation statements) | References 16 publications
“…The URSA dataset [6] increases the quality of semantic segmentation labels generated from GTA V by labelling each texture encountered in the game. This allows unlimited data generation without any extra annotation time.…”
Section: D. Scene Understanding (mentioning)
confidence: 99%
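The texture-labelling approach described in this statement amounts to a one-time lookup table: each texture encountered in the game is annotated with a semantic class once, after which every rendered frame's per-pixel texture IDs resolve to labels with no further annotation effort. A minimal sketch of that idea, with entirely hypothetical texture IDs and class assignments (not the dataset's actual tables):

```python
import numpy as np

# Hypothetical one-time annotation: texture ID -> semantic class ID.
# Each texture is labelled once; every subsequent frame reuses this
# table, so human annotation cost stays constant regardless of how
# many frames are generated.
TEXTURE_TO_CLASS = {
    101: 0,  # asphalt  -> road
    207: 1,  # brick    -> building
    315: 2,  # foliage  -> vegetation
}
UNLABELLED = 255  # fallback for textures not yet annotated

def segmentation_mask(texture_id_map: np.ndarray) -> np.ndarray:
    """Resolve a per-pixel texture-ID buffer into a semantic label mask."""
    mask = np.full(texture_id_map.shape, UNLABELLED, dtype=np.uint8)
    for tex_id, class_id in TEXTURE_TO_CLASS.items():
        mask[texture_id_map == tex_id] = class_id
    return mask

# A toy 2x3 "frame" of per-pixel texture IDs.
frame = np.array([[101, 101, 207],
                  [101, 315, 207]])
print(segmentation_mask(frame))
```

The lookup runs per frame at render time; only the table itself ever needs human input.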
“…This representation of reflectivity was not included in our point cloud generation method. In the future, it could potentially be incorporated by labelling the reflectivity of texture values, similar to how the URSA dataset [6] labelled textures for semantic segmentation. Another potential solution could be to train a generative adversarial network (GAN) to add reflectivity information to synthetically generated point clouds.…”
Section: E. Limitations (mentioning)
confidence: 99%
“…Angus et al. [AEK*18] approach the problem from a different perspective by labelling the GTA‐V game world at a constant human annotation cost, independent of the extracted dataset size. At the same time, real‐time 3D development platforms, which enable automatically generated pixel‐perfect ground‐truth annotations, were also used to build data generation frameworks employing hand‐modelled virtual cities with different seasons and illumination modes [RSM*16, HJSE*17], semi‐automatic real‐to‐virtual cloning methods [GWCV16], and procedural, physically based modelling [KPSC19].…”
Section: Image Synthesis Methods Overview (mentioning)
confidence: 99%
“…In [33], Richter et al. used GTA V to capture pixel-by-pixel semantic segmentation using an open source middleware called renderdoc between the game and the GPU. In [34], Angus et al. also extracted semantic segmentation images by changing the textures and shaders in the game files. Richter et al. generated in [31] a benchmark of several data types from GTA V, all annotated with ground truth data for low-level and high-level vision tasks, including optical flow, instance segmentation, detection and object tracking, as well as visual odometry.…”
Section: Related Work (mentioning)
confidence: 99%