2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros47612.2022.9981724
How Do We Fail? Stress Testing Perception in Autonomous Vehicles

Cited by 10 publications (3 citation statements) · References 33 publications
“…Piroli et al. [69] detected the presence of rain and snow using an energy-based anomaly detection framework. Li et al. [70] evaluated and modeled LiDAR visibility under different artificial fog conditions, while Delecki et al. [71] increased pressure on the recognition model by gradually adding computer-synthesized rain, snow, and fog to analyze the causes of recognition failures.…”
Section: Three-Dimensional Environmental Perception in Adverse Weather (mentioning)
Confidence: 99%
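As a rough illustration of the stress-testing idea described in the statement above (progressively increasing synthesized weather severity until the perception model fails), a minimal sketch follows. The callables synthesize_weather, run_detector, and score_detections are hypothetical placeholders supplied by the caller; they are not APIs from the cited works.

```python
# Minimal sketch: sweep synthesized-weather intensity from 0 to 1 and report
# the first intensity at which the detector's score drops below a failure
# threshold. All callables are hypothetical placeholders passed in by the caller.
import numpy as np

def find_failure_intensity(point_cloud,
                           ground_truth_boxes,
                           weather,               # e.g. "rain", "snow", "fog"
                           synthesize_weather,    # (cloud, kind, intensity) -> cloud
                           run_detector,          # cloud -> detections
                           score_detections,      # (detections, gt) -> float in [0, 1]
                           fail_below=0.5,
                           steps=20):
    """Return the lowest weather intensity in [0, 1] at which detection fails."""
    for intensity in np.linspace(0.0, 1.0, steps):
        perturbed = synthesize_weather(point_cloud, kind=weather,
                                       intensity=float(intensity))
        detections = run_detector(perturbed)
        score = score_detections(detections, ground_truth_boxes)  # e.g. mean IoU
        if score < fail_below:
            return float(intensity)   # first severity that breaks the model
    return None                        # model survived the full sweep
```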
“…In [28], researchers utilize a GAN model to produce adversarial objects that can be used to attack LiDAR-based driving systems. In [29], the authors propose a methodology for performing stress tests on LiDAR-based perception. The researchers use a real-world driving dataset and various weather conditions to evaluate the performance of autonomous driving systems.…”
Section: A. Adversarial Noise Attacks (mentioning)
Confidence: 99%
“…The performance evaluation of automotive data fusion often requires summarizing aspects of perceptual performance into a small number of scalar values for comparison [33]. Computing lower-level metrics requires associating the estimated tracks of the System-Under-Test with their corresponding reference tracks, which is realized as follows: the pairwise distances between all estimated and all reference objects or tracks are computed using an object distance function [34,35]. In this paper, we use the Euclidean distance τ between the observation point of the MMW radar and the center point of the camera sensor to associate them, and then use the ratio of τ to the vehicle length W as the threshold for fault classification.…”
(mentioning)
Confidence: 99%
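The association and fault-classification step described in the statement above can be sketched as follows. This is only an illustrative reading under assumptions: the Hungarian assignment, the 2D point representation, and all function and variable names are choices made here for the example, not details taken from the cited paper.

```python
# Illustrative sketch: associate MMW-radar observation points with camera box
# centers by Euclidean distance, then flag a pair as a fault when the ratio
# tau / W (distance over vehicle length) exceeds a chosen threshold.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_and_classify(radar_points, camera_centers,
                           vehicle_length, ratio_threshold=0.5):
    """radar_points: (N, 2); camera_centers: (M, 2); vehicle_length: W in metres."""
    radar_points = np.asarray(radar_points, dtype=float)
    camera_centers = np.asarray(camera_centers, dtype=float)

    # Pairwise Euclidean distances between every radar point and camera center.
    dist = np.linalg.norm(radar_points[:, None, :] - camera_centers[None, :, :],
                          axis=-1)

    # One-to-one association minimising total distance (Hungarian algorithm).
    rows, cols = linear_sum_assignment(dist)

    results = []
    for i, j in zip(rows, cols):
        tau = float(dist[i, j])
        results.append({
            "radar_idx": int(i),
            "camera_idx": int(j),
            "tau": tau,
            "fault": (tau / vehicle_length) > ratio_threshold,  # fault classification
        })
    return results
```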