2023
DOI: 10.1016/j.ins.2023.03.139
Improving the invisibility of adversarial examples with perceptually adaptive perturbation

Cited by 7 publications (1 citation statement). References 35 publications.
“…Wang et al. [16] observe that global perturbations neglect image content and spatial structure, which can leave obvious artifacts in otherwise clean regions of the original image. They therefore propose to adaptively assign perturbations based on the Just Noticeable Difference (JND) of the human eye, adjusting the perturbation strength by using the pixel-by-pixel perceptual redundancy of the adversarial example as a loss function. Similarly, Zhang et al. [17] add the JND of the image as a priori information to the adversarial attack and project the perturbation into the JND space of the original image. Furthermore, they add a visual coefficient to adjust the projection direction of the perturbation so as to consciously balance the transferability and invisibility of the adversarial example.…”
Section: Related Work 2.1 Black-box Adversarial Attacks (mentioning)
confidence: 99%
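The projection step described above (constraining a perturbation to lie within the JND space of the original image) can be sketched as an element-wise clip against a per-pixel JND map. This is a minimal illustration, not the method from [16] or [17]: the `luminance_jnd` function below is a deliberately simplified luminance-masking stand-in for the perceptual models those papers use, and all names are illustrative.

```python
import numpy as np

def luminance_jnd(image: np.ndarray) -> np.ndarray:
    """Hypothetical simplified luminance-based JND map.

    Regions far from mid-gray are assumed to tolerate larger
    perturbations; this stands in for the richer perceptual
    models used in the cited works.
    """
    bg = image.astype(np.float64)
    return 1.0 + 16.0 * np.abs(bg - 127.0) / 127.0

def project_to_jnd(perturbation: np.ndarray, jnd: np.ndarray) -> np.ndarray:
    # Element-wise projection of the perturbation into the
    # interval [-JND(x), +JND(x)] of the original image, so the
    # adversarial change stays below the perceptual threshold.
    return np.clip(perturbation, -jnd, jnd)

# Toy example: an 8x8 grayscale image and a random perturbation.
image = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(np.float64)
delta = np.random.default_rng(1).normal(0.0, 8.0, size=(8, 8))
jnd = luminance_jnd(image)
delta_proj = project_to_jnd(delta, jnd)
```

After projection, every pixel's perturbation magnitude is bounded by its local JND value, which is what makes the perturbation spatially adaptive rather than uniform.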