2020
DOI: 10.1609/aaai.v34i07.6918

Heuristic Black-Box Adversarial Attacks on Video Recognition Models

Abstract: We study the problem of attacking video recognition models in the black-box setting, where the model internals are unknown and the adversary can only make queries to obtain the predicted top-1 class and its probability. Compared with black-box attacks on images, attacking videos is more challenging: because of a video's high dimensionality, the computational cost of searching for adversarial perturbations is much higher. To overcome this challenge, we propose a heuristic black-box attack model that genera…
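The query-only threat model described in the abstract — the attacker sees nothing but the top-1 class and its probability — can be illustrated with a minimal random-search loop. This is a generic sketch, not the paper's heuristic algorithm; the model interface, step size, and query budget below are hypothetical stand-ins:

```python
import numpy as np

def top1_query(model, video):
    """Black-box oracle: only the top-1 class and its probability are visible."""
    probs = model(video)
    cls = int(np.argmax(probs))
    return cls, float(probs[cls])

def random_search_attack(model, video, true_class, eps=0.05, alpha=0.05,
                         steps=200, seed=0):
    """Untargeted random-search attack inside an L_inf ball of radius eps.
    A candidate step is kept only if it lowers the top-1 probability of
    `true_class`; the loop stops early once the predicted class flips."""
    rng = np.random.default_rng(seed)
    adv = video.copy()
    _, best = top1_query(model, adv)
    for _ in range(steps):
        step = rng.uniform(-alpha, alpha, size=video.shape)
        cand = np.clip(adv + step, video - eps, video + eps)  # stay in the ball
        cand = np.clip(cand, 0.0, 1.0)                        # stay a valid video
        cls, p = top1_query(model, cand)
        if cls != true_class:          # prediction flipped: attack succeeded
            return cand, True
        if p < best:                   # confidence dropped: keep candidate
            adv, best = cand, p
    return adv, False
```

Each iteration costs exactly one query, which is why the dimensionality of video inputs (frames × height × width × channels) makes the search so expensive compared with single images.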

Cited by 64 publications (60 citation statements)
References 15 publications
“…Adversarial attack on videos: Wei et al [47] claimed that they are the first to attack videos. Instead of attacking each frame of a video, they apply additive perturbations on randomly selected frames and use the l 2,1 norm to guide the gradient-based optimisation, and evaluated the performance on the CNN+LSTM model. Li et al [24] used a GAN network to generate offline universal perturbations for each frame.…”
Section: Related Work
confidence: 99%
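The l 2,1 norm mentioned in this statement treats the perturbation as one vector per frame: an l 2 norm within each frame, summed (l 1) across frames, which drives entire frames' perturbations to zero and so concentrates the attack on a sparse subset of frames. A minimal sketch, with an illustrative (T, H, W, C) layout assumed:

```python
import numpy as np

def l21_norm(perturbation):
    """l_{2,1} norm of a video perturbation of shape (T, H, W, C):
    take the l_2 norm of each frame, then sum (l_1) across frames.
    Penalising this sum pushes whole frames' perturbations to zero,
    i.e. it encourages perturbing only a few selected frames."""
    flat = perturbation.reshape(perturbation.shape[0], -1)  # one row per frame
    return float(np.linalg.norm(flat, axis=1).sum())
```

Minimising this regulariser alongside the attack loss is what lets the method above perturb randomly selected frames rather than every frame.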
“…Jiang et al [15] were the first to propose a black-box approach to attack videos. Wei et al [48] proposed a heuristic method, and Yan et al [57] used a reinforcement learning algorithm, to select the key frames on which to perform the black-box attack. However, these works only applied additive perturbations based on l p -norm distance.…”
Section: Related Work
confidence: 99%
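The key-frame selection this statement refers to can be illustrated with a simple greedy scorer that queries the model once per frame and ranks frames by how much perturbing each one alone lowers the top-1 confidence. This is a generic illustration of the idea, not the actual heuristic of [48] or the RL policy of [57]; `query_prob` is a hypothetical black-box oracle returning the top-1 probability:

```python
import numpy as np

def rank_frames_by_impact(query_prob, video, noise_scale=0.1, seed=0):
    """Score each frame of `video` (shape (T, ...)) by the drop in the
    model's top-1 probability when that frame alone is perturbed with
    Gaussian noise; return frame indices sorted by descending impact."""
    rng = np.random.default_rng(seed)
    base = query_prob(video)
    drops = []
    for t in range(video.shape[0]):
        probe = video.copy()
        probe[t] += rng.normal(0.0, noise_scale, size=probe[t].shape)
        drops.append(base - query_prob(probe))   # larger drop = more sensitive
    return np.argsort(np.array(drops))[::-1]
```

An attacker would then spend the query budget only on the top-ranked frames, which is the intuition behind attacking key frames instead of the whole clip.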
“…Jiang et al [12] proposed an adversarial attack on a video recognition model under a black-box setting. Wei et al [45] proposed a method that adds perturbation to selected key frames and salient regions. Our work differs from the above-mentioned adversarial action recognition works in that it is in a multi-modal setting and includes attention mechanisms that efficiently and selectively choose segments within an individual modality and among multiple modalities to carry out an intelligent adversarial attack.…”
Section: Adversarial Attack
confidence: 99%
“…Many adversarial attack methods have been proposed for computer vision tasks [2,45], but not for action recognition applications. The complexity arising from the multiple modalities of action recognition, such as RGB, depth, and skeleton, adds to the challenges of adversarial attacks on such a task.…”
Section: Introduction
confidence: 99%
“…However, recent studies have shown that deep neural networks (DNNs) are highly vulnerable to adversarial examples [7,23], which are generated by adding small, human-imperceptible perturbations that can lead to wrong predictions. The existence of adversarial examples poses serious security threats to the application of DNNs in security-critical scenarios, such as autonomous driving [20], face recognition [5], and video analysis [26]. As a result, adversarial examples have attracted substantial research attention in recent years.…”
Section: Introduction
confidence: 99%