2022
DOI: 10.1109/lra.2022.3192802
V2X-Sim: Multi-Agent Collaborative Perception Dataset and Benchmark for Autonomous Driving

Cited by 128 publications (70 citation statements)
References 48 publications
“…1, we can see that with our uncertainty quantification method, detected objects with low accuracy tend to have large uncertainties, and the constructed uncertainty set covers the ground-truth bounding box in most cases. Compared with the state-of-the-arts [18], [19], our Double-M Quantification method achieves up to 4× improvement on uncertainty score and up to 3.13% accuracy improvement on the comprehensive collaborative perception dataset, V2X-SIM [1].…”
Section: Introduction
confidence: 95%
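The coverage property described in the excerpt above (an uncertainty set built around each detection that contains the ground-truth box in most cases) can be sketched as follows. This is a minimal illustration assuming axis-aligned 2D boxes in [x1, y1, x2, y2] form; the inflation rule, `k`, and all function names are illustrative assumptions, not the Double-M Quantification method itself:

```python
import numpy as np

def inflate_box(box, sigma, k=2.0):
    """Form an uncertainty set by inflating an axis-aligned box
    [x1, y1, x2, y2] by k predicted standard deviations per coordinate.
    (Illustrative rule, not the cited method.)"""
    box = np.asarray(box, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return np.concatenate([box[:2] - k * sigma[:2], box[2:] + k * sigma[2:]])

def covers(uncertainty_box, gt_box):
    """True if the uncertainty set fully contains the ground-truth box."""
    u = np.asarray(uncertainty_box, dtype=float)
    g = np.asarray(gt_box, dtype=float)
    return bool(np.all(u[:2] <= g[:2]) and np.all(u[2:] >= g[2:]))

# A low-accuracy detection with a large predicted sigma still covers the GT box.
pred = [10.0, 10.0, 20.0, 20.0]
gt = [9.0, 9.5, 21.0, 20.5]
u = inflate_box(pred, sigma=[1.0, 1.0, 1.0, 1.0], k=2.0)
print(covers(u, gt))  # True
```

The same detection with a small predicted sigma would fail to cover a displaced ground-truth box, which is the behavior the excerpt attributes to low-uncertainty, high-accuracy detections.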
“…Multi-agent collaborative object detection has been proposed to leverage the viewpoints of other agents to improve the detection accuracy compared with the individual viewpoint [1]. Recent research has shown the effectiveness of early, late, and intermediate fusion of collaborative detection, which respectively transmits raw data, output bounding boxes, and intermediate features [2], [3], [4], and the improved collaborative object detection results will benefit the self-driving decisions of connected and autonomous vehicles (CAVs) [5].…”
Section: Introduction
confidence: 99%
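The excerpt above distinguishes three fusion strategies by what agents transmit: raw data (early), output bounding boxes (late), or intermediate features (intermediate). A minimal sketch, assuming point clouds as N×3 arrays, boxes as N×4 arrays, and BEV feature maps as 2D arrays; the aggregation choices (concatenation, element-wise max) and function names are illustrative assumptions:

```python
import numpy as np

def early_fusion(point_clouds):
    """Early fusion: agents share raw point clouds (N_i x 3 arrays),
    merged into one cloud before any detector runs."""
    return np.concatenate(point_clouds, axis=0)

def late_fusion(box_sets):
    """Late fusion: each agent runs its own detector and shares output
    boxes (N_i x 4 arrays); here they are simply pooled (a real system
    would also apply cross-agent non-maximum suppression)."""
    return np.concatenate(box_sets, axis=0)

def intermediate_fusion(feature_maps):
    """Intermediate fusion: agents share intermediate (e.g. BEV) feature
    maps, aggregated element-wise before the shared detection head."""
    return np.max(np.stack(feature_maps, axis=0), axis=0)

# Two agents sharing 8x8 BEV feature maps:
bev_a = np.random.rand(8, 8)
bev_b = np.random.rand(8, 8)
fused = intermediate_fusion([bev_a, bev_b])
print(fused.shape)  # (8, 8)
```

The trade-off implied by the excerpt is bandwidth versus information: early fusion transmits the most data and preserves the most information, late fusion transmits the least, and intermediate fusion sits between the two.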
“…We validate our SyncNet on LIDAR-based 3D object detection task [3,9] with a multi-agent dataset, V2X-Sim [14]. V2X-Sim is built with the co-simulation of SUMO [12] and CARLA [7].…”
Section: Multi-agent 3D Object Detection Dataset
confidence: 99%
“…We extensively evaluate the novel latency-aware collaborative perception system with SyncNet on the V2X-Sim dataset [14] for collaborative 3D object detection in autonomous driving. The results verify the robustness of our system and show substantial improvements over state-of-the-art approaches.…”
Section: Introduction
confidence: 99%
“…Sooner or later, these autonomous systems will be deployed on roads at scale, opening up opportunities for cooperation between them. Previous works in [4], [5], [6], [7], [8], [9], [10], [11], [12] have demonstrated that by leveraging Vehicle-to-Everything (V2X) communication technology, AVs and infrastructure can perform cooperative perception using shared sensing information and thus significantly enhance perception performance. Despite the remarkable improvement, these works evaluate the proposed systems on datasets with natural scenarios that do not contain sufficient safety-critical scenes.…”
Section: Introduction
confidence: 99%