2023 · Preprint
DOI: 10.48550/arxiv.2302.14267

Adversarial Attack with Raindrops

Abstract: Deep neural networks (DNNs) are known to be vulnerable to adversarial examples, which are usually designed artificially to fool DNNs but rarely exist in real-world scenarios. In this paper, we study adversarial examples caused by raindrops to demonstrate that many natural phenomena can act as adversarial attackers to DNNs. Moreover, we present a new approach to generate adversarial raindrops, denoted AdvRD, using the generative adversarial network (GAN) technique to simulat…
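The excerpt cuts off before the method details, but the general recipe the abstract describes — a GAN-style generator producing raindrop layers that are composited onto a clean image and optimised to fool a classifier — can be sketched as below. Everything here (network shape, alpha blending, latent-space search, all names) is an illustrative assumption, not the paper's actual AdvRD design.

```python
# Illustrative sketch only: the paper's AdvRD architecture is not given in this
# excerpt. We assume a pre-trained, frozen generator G mapping noise to a
# raindrop layer (RGB + alpha), which is alpha-blended onto a clean image.
import torch
import torch.nn as nn

class RaindropGenerator(nn.Module):
    """Hypothetical GAN generator: noise -> raindrop overlay for a 224x224 image."""
    def __init__(self, z_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 4 * 224 * 224), nn.Sigmoid(),  # RGB + alpha per pixel
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(-1, 4, 224, 224)

def apply_raindrops(image: torch.Tensor, overlay: torch.Tensor) -> torch.Tensor:
    """Alpha-blend the generated raindrop layer onto the clean image."""
    rgb, alpha = overlay[:, :3], overlay[:, 3:4]
    return (1 - alpha) * image + alpha * rgb

def adv_raindrop_attack(model, G, image, label, steps=200, lr=0.05):
    """Search the latent space for raindrops that maximize the model's loss.
    `image` is (1, 3, 224, 224), `label` is a (1,) tensor of the true class."""
    z = torch.randn(1, 128, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        adv = apply_raindrops(image, G(z))
        loss = -nn.functional.cross_entropy(model(adv), label)  # maximize loss
        opt.zero_grad()
        loss.backward()  # gradients flow through the frozen G into z only
        opt.step()
    return apply_raindrops(image, G(z)).detach()
```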

Cited by 4 publications (4 citation statements), all published in 2024
References: 34 publications

Citation statements (ordered by relevance):
“…Introduced in [37], FID originally served as a metric to evaluate the performance of GANs by assessing the similarity of generated images. FID is one of the recent tools for assessing the visual quality of adversarial images, and it aligns closely with human judgment (see [38-40]). On the other hand, [41,42] provide an assessment of L_p-norms as a measure of perceptual distance between images.…”
Section: Assessment of the Human Perception of Distinct Images (mentioning, confidence: 99%)
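The FID the statement refers to has a closed form over Inception feature statistics: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). A minimal sketch of that computation, assuming the (N, d) feature arrays have already been extracted (e.g. Inception-v3 pool3 activations); the L_p helper mirrors the baseline that [41,42] assess:

```python
# Sketch of the standard FID computation between two sets of Inception features.
# Feature extraction itself is assumed to have been done upstream.
import numpy as np
from scipy import linalg

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Frechet Inception Distance between two (N, d) feature arrays."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the covariance product; keep the real part to
    # discard tiny imaginary components caused by numerical error.
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

def lp_distance(x: np.ndarray, y: np.ndarray, p: float = 2.0) -> float:
    """L_p distance between two images (flattened), the baseline [41,42] assess."""
    return float(np.linalg.norm((x - y).ravel(), ord=p))
```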
“…The goal of these works is to obtain the minimum perturbation by optimising the generation process while still achieving robust adversarial effects. In recent studies, the raindrop attack has been introduced as a notable technique for assessing the efficacy of using simulated raindrops as perturbations [35,36]. This approach enables researchers to evaluate the impact of raindrop-like perturbations on deep neural networks and subsequently develop defence mechanisms to enhance the DNNs' resilience against such attacks.…”
Section: Digital Attacks (mentioning, confidence: 99%)
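As a concrete reading of the evaluation described above, the sketch below measures a classifier's accuracy on clean versus raindrop-perturbed test images; `render_raindrops` is a hypothetical stand-in for whatever raindrop simulator [35,36] use, which this excerpt does not specify.

```python
# Sketch: quantify how much simulated raindrops degrade classifier accuracy.
import torch

@torch.no_grad()
def accuracy_under_raindrops(model, loader, render_raindrops):
    """Return (clean_accuracy, rainy_accuracy) over a test DataLoader."""
    clean_hits, rainy_hits, total = 0, 0, 0
    model.eval()
    for images, labels in loader:
        clean_hits += (model(images).argmax(dim=1) == labels).sum().item()
        rainy = render_raindrops(images)  # raindrop-perturbed copies
        rainy_hits += (model(rainy).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return clean_hits / total, rainy_hits / total
```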
“…Some recent works have also focused on other image attributes, such as color, texture, and camouflage, to generate attacks [26-30]. Additionally, some scholars [31,32] have utilized digital synthesis methods to simulate raindrops to deploy adversarial attacks, subsequently using the generated adversarial samples to improve the model's robustness. These works have been conducted primarily in digital environments, where attackers can directly modify input images to generate adversarial samples.…”
Section: Adversarial Attacks in the Visible Light Field (mentioning, confidence: 99%)
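The robustness-improvement step this statement mentions is, in outline, adversarial training on the synthesized samples. A minimal sketch under that assumption (the cited works' exact training procedures are not given here); `render_raindrops` is again a hypothetical simulator:

```python
# Sketch: augment each batch with raindrop-perturbed copies and train on both,
# a standard adversarial-training recipe for improving robustness.
import torch
import torch.nn.functional as F

def train_with_raindrops(model, loader, render_raindrops, optimizer, epochs=10):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            adv = render_raindrops(images).detach()  # simulated raindrop samples
            batch = torch.cat([images, adv], dim=0)
            targets = torch.cat([labels, labels], dim=0)
            loss = F.cross_entropy(model(batch), targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```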