2021
DOI: 10.48550/arxiv.2108.00401
Preprint

Advances in adversarial attacks and defenses in computer vision: A survey

Cited by 7 publications (10 citation statements)
References 0 publications
“…This indicates an increasing interest of the research community in this direction. This trend is in line with the literature in the closely related research direction of adversarial attacks on deep learning [108]. We conjecture that this ever-increasing research activity in these directions is a natural consequence of awareness of the vulnerabilities of deep learning in adversarial setups.…”
Section: Discussion and Future Outlook (supporting)
confidence: 69%
“…Hence, there is still considerable opportunity to explore new sub-topics in this direction. A guide to such an exploration is provided by the sister problem of adversarial attacks on deep learning [108]. The discovery of adversarial attacks was made in 2013, a few years earlier than the identification of neural Trojans.…”
Section: Discussion and Future Outlook (mentioning)
confidence: 99%
“…Deep Neural Networks (DNNs) have showcased their superior performance in visual understanding tasks like image classification and video recognition [10]. Adversarial attacks could impact both image and video elements of data, causing the machine learning model to make an inaccurate prediction with a high level of confidence [11]. Commonly, a majority of the attacks in the healthcare domain target medical imaging data (e.g., X-rays, CT scans, radiographs, MRIs, etc.)…”
Section: Introduction (mentioning)
confidence: 99%
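The mechanism described in the last excerpt, a small crafted perturbation flipping a model's prediction, can be illustrated with a minimal FGSM-style sketch on a toy logistic model. This is not the surveyed paper's method; the weights, input, and step size below are hypothetical, chosen only to show the sign-of-gradient step that many attacks in this literature build on:

```python
import math

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, x, y, eps):
    # Gradient of binary cross-entropy loss w.r.t. the input x
    # for a logistic model p = sigmoid(w . x) is (p - y) * w.
    p = sigmoid(dot(w, x))
    grad = [(p - y) * wi for wi in w]
    # FGSM step: move each input feature by eps in the direction
    # that increases the loss (the sign of the gradient).
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# Hypothetical "trained" weights and a clean input with true label y = 1.
w = [0.5, -0.3, 0.8]
x = [0.2, -0.1, 0.3]
y = 1

clean_p = sigmoid(dot(w, x))       # > 0.5: correctly classified as 1
x_adv = fgsm(w, x, y, eps=0.4)
adv_p = sigmoid(dot(w, x_adv))     # < 0.5: the perturbed input is misclassified
```

On deep image classifiers the same one-step perturbation is computed by backpropagation through the network and is typically imperceptible to humans, which is the vulnerability the surveyed literature studies.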