Proceedings of the 39th International Conference on Computer-Aided Design 2020
DOI: 10.1145/3400302.3415758

Counteracting adversarial attacks in autonomous driving

Cited by 15 publications (4 citation statements, published in 2021 and 2024) · References 19 publications
“…The proposed defense strategy was based on adversarial training with a novel regularization term that incorporated local smoothness and stereo information. The evaluation results showed that this approach was more effective than regular adversarial training and also improved the detection performance of the original model [57].…”
Section: Defense Methods Introduced On Autonomous Vehicles Against Ad... (mentioning)
confidence: 99%
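The stereo-aware regularizer itself is specific to [57]; as a generic illustration only, here is a minimal PyTorch sketch of adversarial training with a smoothness penalty. The model, optimizer, and the KL-based penalty are assumptions for illustration, not the method of [57]:

```python
import torch
import torch.nn.functional as F

def adv_training_step(model, optimizer, x, y, epsilon=0.03, lam=0.1):
    """One adversarial-training step with a generic smoothness penalty.

    The KL term below merely encourages locally smooth predictions; it is
    a stand-in, NOT the stereo-aware regularizer proposed in [57].
    """
    # Craft a one-step (FGSM) adversarial example from the current model.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    clean_logits = model(x)
    adv_logits = model(x_adv)
    # Adversarial loss plus a penalty keeping clean/perturbed outputs close.
    smooth = F.kl_div(F.log_softmax(adv_logits, dim=1),
                      F.softmax(clean_logits, dim=1),
                      reduction="batchmean")
    loss = F.cross_entropy(adv_logits, y) + lam * smooth
    loss.backward()
    optimizer.step()
    return loss.item()
```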
“…These concerns have prompted adversarial attacks that reveal model vulnerabilities. Attacks such as FGSM and PGD can mislead the model into misclassifying the object orientation of the vehicle, or even cause it to perceive the class 'Car' as 'Background', potentially leading to an accident [29].…”
Section: B. Effects Of Adversarial Attacks (mentioning)
confidence: 99%
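For reference, a minimal PGD sketch in PyTorch (FGSM is the single-step special case). The model, input tensors, and hyperparameter values are illustrative assumptions, not taken from [29]:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    """Projected Gradient Descent: iterated FGSM with projection back onto
    the L-infinity ball of radius epsilon around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project into the epsilon ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```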
“…Such malicious examples can mislead Deep Learning (DL) models into making wrong predictions without being noticeable to humans. Owing to this critical threat, adversarial examples have become a major challenge for applying deep learning in security-sensitive scenarios [2]-[7] such as autonomous driving [7]. Compared with white-box adversarial example generation attacks [1], [5], [8]-[11], which allow adversaries to access the architecture and parameters of the target model, black-box attacks [12]-[20] are more threatening and practical in real applications, where an adversary can only query the target model via application programming interfaces (APIs).…”
Section: Introduction (mentioning)
confidence: 99%
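As an illustration of this query-only setting, here is a minimal random-search black-box attack sketch; the `predict` API, inputs, and query budget are hypothetical assumptions, not a specific method from the cited works:

```python
import numpy as np

def random_search_attack(predict, x, label, epsilon=0.05, queries=1000, rng=None):
    """Query-only black-box attack via random search in an L-infinity ball.

    `predict(x)` is assumed to return a vector of class probabilities and
    nothing else (no gradients), mimicking access through a prediction API.
    """
    rng = rng or np.random.default_rng(0)
    best = x.copy()
    best_conf = predict(best)[label]  # model confidence in the true label
    for _ in range(queries):
        # Propose a perturbation inside the epsilon ball; keep it if it
        # lowers the model's confidence in the true label.
        candidate = np.clip(x + rng.uniform(-epsilon, epsilon, size=x.shape),
                            0.0, 1.0)
        conf = predict(candidate)[label]
        if conf < best_conf:
            best, best_conf = candidate, conf
            if np.argmax(predict(best)) != label:
                break  # misclassification achieved
    return best
```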