2019
DOI: 10.48550/arxiv.1912.01667
Preprint

A Survey of Black-Box Adversarial Attacks on Computer Vision Models

Abstract: Machine learning has seen tremendous advances in the past few years, which has led to deep learning models being deployed in varied applications of day-to-day life. Attacks on such models using perturbations, particularly in real-life scenarios, pose a severe challenge to their applicability, pushing research in a direction that aims to enhance the robustness of these models. After the introduction of these perturbations by Szegedy et al. [1], a significant amount of research has focused on the reliability…

Cited by 20 publications (27 citation statements)
References 40 publications
“…Finally, some works [17,35] have shown that it is possible to perform adversarial attacks even in real-world scenarios with physical objects. For a more thorough review of the state of the art, the reader can refer to [1,5,23,36].…”
Section: Related Work
confidence: 99%
“…Black-box attacks assume that the attacker only knows the outputs of the target DNN, e.g., category labels or confidence scores. They are categorized into three main classes [25]: gradient estimation-based [26]-[28], transferability-based [20], [29], [30], and local search-based methods [31]-[33]. The gradient estimation-based methods perform adversarial attacks by estimating the gradient of the target model.…”
Section: B. Black-box Adversarial Attacks
confidence: 99%
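
As an illustration of the gradient estimation-based class described in the excerpt above, the following is a minimal sketch, not any specific method from the surveyed papers. It assumes a hypothetical query-only interface `predict_scores(x)` that returns per-class confidence scores, approximates the gradient with random-direction finite differences, and applies a single FGSM-style perturbation step.

```python
import numpy as np

def estimate_gradient(predict_scores, x, class_idx, sigma=1e-3, n_samples=50):
    """Approximate d score[class_idx] / d x using symmetric finite differences."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)                      # random probing direction
        plus = predict_scores(x + sigma * u)[class_idx]    # query the black-box model
        minus = predict_scores(x - sigma * u)[class_idx]
        grad += (plus - minus) / (2.0 * sigma) * u
    return grad / n_samples

def black_box_fgsm_step(predict_scores, x, true_class, eps=0.03):
    """One untargeted step: perturb the input to lower the true-class confidence."""
    grad = estimate_gradient(predict_scores, x, true_class)
    x_adv = x - eps * np.sign(grad)                        # move against the estimated gradient
    return np.clip(x_adv, 0.0, 1.0)                        # keep pixels in a valid range
```

In practice such attacks iterate this step and trade off query budget (`n_samples` per step) against the quality of the gradient estimate.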
“…These methods aim to detect the adversarial examples or to restore the adversarial input to be closer to the original image space. Adversarial detection methods (Bhambri et al. 2020) include MagNet, Feature Squeezing, and Convex Adversarial Polytope. The MagNet (Meng and Chen 2017) method consists of two parts: detector and reformer.…”
Section: Common Defense Strategies
confidence: 99%
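
To make the detector/reformer split mentioned in this excerpt concrete, here is a minimal MagNet-style sketch. It assumes a hypothetical pre-trained denoising autoencoder `autoencoder(x)` and classifier `classifier(x)`; the detector rejects inputs with anomalously large reconstruction error, and the reformer replaces accepted inputs with their reconstructions before classification.

```python
import numpy as np

def reconstruction_error(autoencoder, x):
    """Mean absolute reconstruction error, used as the detection statistic."""
    return np.mean(np.abs(autoencoder(x) - x))

def magnet_defend(autoencoder, classifier, x, threshold):
    """Return the prediction on the reformed input, or None if the input is rejected."""
    if reconstruction_error(autoencoder, x) > threshold:
        return None                              # detector: flag likely adversarial input
    x_reformed = autoencoder(x)                  # reformer: project input toward the data manifold
    return classifier(x_reformed)
```

The threshold is typically chosen from reconstruction errors on clean validation data so that only a small fraction of benign inputs are rejected.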