2022 IEEE Intelligent Vehicles Symposium (IV) 2022
DOI: 10.1109/iv51971.2022.9827222
Adversarial Attack and Defense of YOLO Detectors in Autonomous Driving Scenarios

Cited by 27 publications (8 citation statements) | References 12 publications
“…Much research focuses on assessing and improving AV's abilities to recognize and respond to their environments (Hoss et al, 2022). Current efforts predominantly utilize adversarial attacks to test perception modules, identifying deficiencies in traffic participant recognition (Tang et al, 2023), such as tests targeting visual recognition (Chen et al, 2019;Zhao et al, 2019b;Im Choi and Tian, 2022), LiDAR detection (Li et al, 2021;Zhu et al, 2021a,b), and perception fusion (Zhong et al, 2022). Other perception module tests include metamorphic testing (Zhou and Sun, 2019;Wang et al, 2021;Ramanagopal et al, 2018) and combinatorial testing (Gladisch et al, 2020;Cheng et al, 2018), involving sensor data processing and analysis, and vehicular responses to different traffic scenarios, obstacles, and environmental factors.…”
Section: Perception Testing (citation type: mentioning, confidence: 99%)
“…Nassi et al [37] used a projector to project misleading traffic signs, which caused the ADSs to recognize the deceptive signs as real. Similarly, many other researchers focused on different modules to attack, such as road sign recognition [38,39], image recognition [40,41], object detection [42,43], and traffic sign recognition [44][45][46]. Other than the adversarial attacks mentioned above, false data injection [47] and denial of service attacks [48] are well-known attacks on autonomous driving systems.…”
Section: Adversarial Attacks on Autonomous Driving Systems (citation type: mentioning, confidence: 99%)
“…One of the most commonly used object-detection models is You Only Look Once (YOLO) (Redmon et al, 2016), which is based on a single convolutional neural network (CNN) that simultaneously predicts the class and location of objects in an image. Several studies have shown that YOLO is vulnerable to adversarial attacks (Liu et al, 2018;Im Choi & Tian, 2022;Thys et al, 2019;Hu et al, 2021). For example, targeted perturbation attacks can be used to modify an input image in a way that causes YOLO to misidentify or fail to detect certain objects.…”
Section: Previous Work (citation type: mentioning, confidence: 99%)
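The perturbation attacks described in the last statement can be illustrated with a minimal sketch. The code below is not the paper's method and does not use a real YOLO network; it applies a single FGSM-style step (Goodfellow et al.) to a toy differentiable "objectness" score, a hypothetical stand-in for a detector's confidence, showing how stepping against the gradient sign suppresses a detection.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objectness(x, w, b):
    # Toy stand-in for a detector's objectness score (NOT a real YOLO head):
    # a linear map followed by a sigmoid.
    return sigmoid(w @ x + b)

def fgsm_suppress(x, w, b, eps):
    """One FGSM-style step that lowers the objectness score (evasion).

    For s = sigmoid(w.x + b), the gradient w.r.t. x is s*(1-s)*w, so
    moving x against sign(grad) by eps per pixel reduces the score.
    Inputs are clipped back to the valid [0, 1] range.
    """
    s = objectness(x, w, b)
    grad = s * (1.0 - s) * w
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # hypothetical detector weights
b = 0.5
x = rng.uniform(0.4, 0.6, size=64) # toy "image patch" in [0, 1]

score_before = objectness(x, w, b)
x_adv = fgsm_suppress(x, w, b, eps=0.1)
score_after = objectness(x_adv, w, b)
print(score_after < score_before)  # the perturbation lowers detection confidence
```

Against a real YOLO model the same idea uses backpropagation through the full network to get the input gradient, and targeted variants instead maximize the score of a wrong class; the per-pixel budget `eps` bounds the visibility of the perturbation.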