2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc55140.2022.9922263
MONA: The Munich Motion Dataset of Natural Driving

Abstract: Real-world datasets facilitate the development of autonomous vehicles, especially when they are accessible, diverse, and provide a measure of accuracy. While existing datasets have been accessible and diverse, they cannot provide any measure of accuracy. To estimate the accuracy of the detection of traffic participants in our setup, we repetitively drove through our observation area with a measurement vehicle with highly accurate localization and LiDAR sensors. Our experiments showed an average overall positio…
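The accuracy estimate described in the abstract amounts to comparing the infrastructure-based detections of the measurement vehicle against its highly accurate onboard localization. Below is a minimal sketch of such a comparison, assuming time-stamped 2D positions on both sides; the function name, the matching tolerance, and the array layout are illustrative assumptions, not the authors' evaluation code.

    import numpy as np

    def average_position_error(detections, reference, max_dt=0.05):
        # detections, reference: arrays of shape (N, 3) / (M, 3) with columns
        # (timestamp [s], x [m], y [m]); max_dt is an assumed matching
        # tolerance in seconds, not a value from the paper.
        errors = []
        for t, x, y in detections:
            i = np.argmin(np.abs(reference[:, 0] - t))   # temporally closest reference pose
            if abs(reference[i, 0] - t) > max_dt:
                continue                                  # no reference pose close enough in time
            errors.append(np.hypot(x - reference[i, 1], y - reference[i, 2]))
        return float(np.mean(errors)) if errors else float("nan")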

Cited by 11 publications (8 citation statements)
References 39 publications
“…[table fragment; the header reads "Front-view Image Laneline PV", followed by one row per dataset, interleaved with the text:]
VPG [76] | 2017 | - | 20K/20K | PV
TUsimple [77] | 2017 | 6.4K | 6.4K/128K | PV
CULane [78] | 2018 | - | 133K/133K | PV
ApolloScape [14] | 2018 | 235 | 115K/115K | PV
LLAMAS [79] | 2019 | 14 | 79K/100K | PV
3D Synthetic [80] | 2020 | - | 10K/10K | PV
CurveLanes [81] | 2020 | - | 150K/150K | PV
VIL-100 [82] | 2021 | 100 | 10K/10K | PV
OpenLane-V1 [83] | 2022 | 1K | 200K/200K | 3D
ONCE-3DLane [84] | 2022 | - | 211K/211K | 3D
OpenLane-V2 [85] (row truncated in the source)

Traffic-light detection datasets can be regarded as a specific class of image detection datasets. Early lane-line detection datasets [14, 75-82] detect lane lines in the 2D image coordinate system and then obtain 3D lane lines through an Inverse Perspective Mapping (IPM) projection matrix. Because IPM assumes the road surface is planar, while most real road surfaces vary in height, lane lines represented in the perspective view are prone to errors when projected into 3D space. To address this, recent lane-line datasets [83, 84] pose the task of detecting 3D lane lines directly. Since lane lines are not a complete representation of a lane and cannot capture lane direction or the connections between lanes, OpenLane-V2 [85] goes further and introduces an instance-level lane representation, endowing it with connectivity and associations with traffic signs through the construction of topological relations. The development of mapping-oriented datasets brings the information contained in model predictions ever closer to that of high-definition maps.

[second table fragment:]
Argoverse [16]: [137], [138], [139]
nuScenes [8]: [140], [141], [142]
Waymo [9]: [143], [144], [145]
Interaction [146]: [147], [148], [149]
MONA [150]
Trajectory Comfort
nuPlan [18]: [151], [152], [153]
CARLA [30]: [154], [155], [156]
MetaDrive [157]: [158], [159], [160]
Apollo [161]: [162], [163], [164]
Path Planning Maps for Road Network Routes Connecting to Nod...…”
Section: Front-view GPS and IMU and Infrared Camera (unclassified)
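The quoted passage attributes the 2D-to-3D lifting error to IPM's flat-road assumption: every lane pixel is intersected with the plane z = 0, so any real height deviation shifts the recovered point along the viewing ray. The sketch below illustrates that assumption with a pinhole camera; the intrinsics, camera height, and pitch are made-up illustrative values, not taken from any cited dataset.

    import numpy as np

    # Illustrative camera: 1.5 m above a flat road, pitched 10 degrees downward.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])
    c = np.array([0.0, 0.0, 1.5])            # camera center in world coordinates (z up)
    p = np.deg2rad(10.0)
    # Columns are the camera x (right), y (down), z (forward) axes in world coordinates.
    R_wc = np.array([[ 0.0, -np.sin(p),  np.cos(p)],
                     [-1.0,        0.0,        0.0],
                     [ 0.0, -np.cos(p), -np.sin(p)]])

    def ipm_lift(u, v):
        # Intersect the viewing ray of pixel (u, v) with the assumed ground plane z = 0.
        d = R_wc @ np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in world frame
        s = -c[2] / d[2]                                       # scale at which the ray reaches z = 0
        return c + s * d

    print(ipm_lift(640.0, 360.0))  # the image center maps to a road point a few metres ahead
    # If the true lane point lies at height h instead of 0, the correct scale would be
    # (h - c[2]) / d[2]; using the flat-road scale instead displaces the estimate along
    # the ray in proportion to h / |d[2]|, which is the error the quoted passage describes.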
“…To facilitate benchmarking of motion planning on roads, CommonRoad provides a range of vehicle models and cost functions. To enable the use of more diverse and realistic scenarios, CommonRoad provides dataset converters to convert real-world data from various sources, such as drones [4]-[9], onboard sensors [10], and infrastructure [11], into a unified representation. One can also create handcrafted scenarios or generate safety-critical traffic scenarios…”
Section: A. Related Work (mentioning)
confidence: 99%
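The converters mentioned in the quote produce CommonRoad scenario files, which are typically consumed through the commonroad-io Python package. The snippet below is a minimal, hedged example of loading such a file and walking over its recorded traffic participants; the file name is a placeholder and not part of any converter's actual output.

    from commonroad.common.file_reader import CommonRoadFileReader

    # "converted_scenario.xml" is a placeholder for a scenario produced by one of the
    # dataset converters; it is not a file shipped with the MONA dataset.
    scenario, planning_problem_set = CommonRoadFileReader("converted_scenario.xml").open()

    for obstacle in scenario.dynamic_obstacles:
        # Each dynamic obstacle carries a type, a shape, and a recorded state trajectory.
        n_states = len(obstacle.prediction.trajectory.state_list) if obstacle.prediction else 0
        print(obstacle.obstacle_id, obstacle.obstacle_type, n_states)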
“…OpenSCENARIO. Running example: In the scenario in Fig. 2a, an overtaking Story is specified in the Storyboard.…”
Section: A. OpenSCENARIO Format (mentioning)
confidence: 99%
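The running example in the quote places an overtaking Story inside the Storyboard of an OpenSCENARIO file. The snippet below sketches only that nesting (Storyboard > Story > Act > ManeuverGroup > Maneuver > Event) with Python's standard library; the element names follow OpenSCENARIO, but the story and act names are invented, and mandatory attributes and the Entities/Init content are omitted, so this is a structural outline rather than a valid scenario file.

    import xml.etree.ElementTree as ET

    # Structural outline of an OpenSCENARIO Storyboard; all names are illustrative only.
    root = ET.Element("OpenSCENARIO")
    storyboard = ET.SubElement(root, "Storyboard")
    ET.SubElement(storyboard, "Init")                          # initial entity states would go here
    story = ET.SubElement(storyboard, "Story", name="OvertakingStory")
    act = ET.SubElement(story, "Act", name="OvertakeAct")
    group = ET.SubElement(act, "ManeuverGroup", name="EgoOvertakes")
    maneuver = ET.SubElement(group, "Maneuver", name="LaneChangeAndPass")
    ET.SubElement(maneuver, "Event", name="StartOvertake")     # events trigger the concrete actions

    print(ET.tostring(root, encoding="unicode"))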
“…Every pixel in the range images we supply also includes accurate information about the vehicle's attitude, in addition to sensor attributes such as elongation. Since this is the original synchronized dataset with such low-level information, it will facilitate studies of alternative LiDAR input formats to the standard 3D point set format [6], [8]. Now, there are 1000 scenarios used for training and validation, along with 150 scenes used for testing; every scene lasts for 20 seconds [6].…”
Section: Introduction (mentioning)
confidence: 99%
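The contrast drawn in the quote is between LiDAR range images (with per-pixel attributes such as elongation and the vehicle pose) and the standard 3D point-set format. The sketch below shows the usual spherical-to-Cartesian conversion from a range image back to a point set; the image size, field-of-view limits, and row/column layout are assumed illustrative values, not the parameters of the cited dataset.

    import numpy as np

    def range_image_to_points(range_image, fov_up_deg=3.0, fov_down_deg=-25.0):
        # range_image: (H, W) array of ranges in metres. Rows are assumed to map
        # linearly to elevation between fov_down and fov_up, and columns to a full
        # 360-degree azimuth sweep (an illustrative layout, not a dataset spec).
        h, w = range_image.shape
        elev = np.deg2rad(np.linspace(fov_up_deg, fov_down_deg, h))[:, None]   # (H, 1)
        azim = np.deg2rad(np.linspace(180.0, -180.0, w))[None, :]              # (1, W)
        x = range_image * np.cos(elev) * np.cos(azim)
        y = range_image * np.cos(elev) * np.sin(azim)
        z = range_image * np.sin(elev)
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)

    points = range_image_to_points(np.full((64, 2048), 10.0))   # dummy 10 m range image
    print(points.shape)                                          # -> (131072, 3)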