2022
DOI: 10.1109/jiot.2021.3099164
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles

Abstract: In recent years, many deep learning models have been adopted in autonomous driving. At the same time, these models introduce new vulnerabilities that may compromise the safety of autonomous vehicles. Specifically, recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models. Although driving safety is the ultimate concern for autonomous driving, there is no comprehensive study on the linkage between the perfor…

Cited by 27 publications (7 citation statements)
References 25 publications
“…PRACTICAL APPLICATIONS Autonomous Driving. Zhang et al. [225] propose an end-to-end driving safety evaluation framework with purpose-built performance metrics for the assessment of driving safety, arguing that it is desirable to evaluate the robustness of deep learning models in terms of driving safety rather than model precision. Rossolini et al. [226] focus on evaluating the robustness of segmentation models to real-world adversarial attacks and demonstrate that these models already exhibit a certain threshold of robustness.…”
Section: Practical Applications and Future Directions
confidence: 99%
“…However, in deep learning, w contains a large number of parameters, so it is difficult to optimize Equation (3) directly. As a modern deep neural network (DNN) is trained on a batch of data of size B_l + B_u [49], the loss of SPSL-3D simplifies to:…”
Section: Self-Paced Semi-Supervised Learning-Based 3D Object Detection
confidence: 99%
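The quoted passage describes training on a mixed mini-batch of B_l labeled and B_u unlabeled (pseudo-labeled) samples; the simplified SPSL-3D loss itself is elided in the excerpt. A minimal sketch of the general idea, in which the `unlabeled_weight` down-weighting term is a hypothetical placeholder and not the actual SPSL-3D formulation:

```python
def semi_supervised_batch_loss(labeled_losses, unlabeled_losses, unlabeled_weight=0.5):
    """Combine per-sample losses from a mixed batch of B_l labeled and
    B_u unlabeled (pseudo-labeled) samples into one scalar objective.

    NOTE: the weighting scheme is a placeholder assumption; the actual
    simplified SPSL-3D loss is not shown in the quoted passage.
    """
    b_l, b_u = len(labeled_losses), len(unlabeled_losses)
    total = sum(labeled_losses) + unlabeled_weight * sum(unlabeled_losses)
    return total / (b_l + b_u)  # normalize by the full batch size B_l + B_u
```

In practice the unlabeled term is usually weighted below the labeled term, since pseudo-labels are noisier than ground-truth annotations.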
“…Its task is to identify the class and predict the 3D bounding box of a targeted object in the traffic scenario. In short, 3D object detection performance directly affects the traffic safety of intelligent driving [3]. As 3D object detection requires spatial information about the environment, light detection and ranging (LiDAR) is a suitable sensor because it can generate a 3D point cloud in real time [4].…”
Section: Introduction
confidence: 99%
“…In other words, an attacker may design specific attacks to deceive the AI systems by disseminating carefully crafted patterns in the environment and thus induce unexpected behavior of the AV [17]. These deliberate patterns, often referred to as perturbations, are imperceptible to AV users but strong enough to deceive the AI model reliably [18], [19], [20], [21]. Consequently, defensive tactics against adversarial attacks have also been the subject of extensive research [22].…”
Section: Introduction
confidence: 99%
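The quoted passage describes imperceptible adversarial perturbations bounded so that they fool the model without being noticeable. A standard illustration of this idea is the Fast Gradient Sign Method (FGSM), shown here against a toy linear "model" whose loss gradient is known analytically; the specific model, weights, and epsilon value are illustrative assumptions, not taken from the cited works:

```python
def fgsm_perturbation(grad, epsilon=0.03):
    """Fast Gradient Sign Method (FGSM) step: shift each input component
    by +/- epsilon in the direction that increases the loss. epsilon
    bounds the L-infinity norm of the change, which is what keeps the
    perturbation small enough to be visually imperceptible."""
    sign = lambda g: (g > 0) - (g < 0)  # -1, 0, or +1
    return [epsilon * sign(g) for g in grad]

# Toy linear model (illustrative assumption): loss(x) = sum(w_i * x_i),
# so the gradient of the loss w.r.t. the input x is simply w.
w = [0.5, -2.0, 0.0]
x = [1.0, 1.0, 1.0]
delta = fgsm_perturbation(w, epsilon=0.1)
x_adv = [xi + di for xi, di in zip(x, delta)]

loss = lambda v: sum(wi * vi for wi, vi in zip(w, v))
assert loss(x_adv) > loss(x)  # the tiny bounded perturbation raises the loss
```

Real attacks on 3D detectors are more elaborate (physical patches, point-cloud perturbations), but the principle is the same: a norm-bounded change to the input, aligned with the loss gradient.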