2019
DOI: 10.1007/978-3-030-31760-7_2

Adversarial Examples in Deep Neural Networks: An Overview

Cited by 13 publications (11 citation statements). References 29 publications.

“…Another aspect of AI technical robustness that has been the focus of recent research is the field of adversarial attacks. A comprehensive review of these methods can be found in [60] [61] [62] [63] [64]. The researchers' aim is to introduce layers of robustness into their models such that the models are not misled by out-of-distribution examples, known or unknown attacks, and targeted or untargeted attacks.…”
Section: B. Technical Robustness and Learning Assurance (mentioning)
confidence: 99%
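
As a minimal sketch of the untargeted attacks mentioned in the statement above, the fast gradient sign method (FGSM) perturbs an input along the sign of the loss gradient so that the model's prediction degrades. The function name, epsilon, and pixel range below are illustrative assumptions, not details from the cited works.

```python
import torch
import torch.nn.functional as F

def fgsm_untargeted(model, x, y, eps=0.03):
    """Untargeted FGSM: take one step of size eps in the direction that
    increases the loss for the true label y (illustrative sketch)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb along the sign of the input gradient, then clamp back
    # to the valid pixel range [0, 1].
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A targeted variant would instead subtract the gradient sign of the loss computed against a chosen target label, pushing the prediction toward that label rather than merely away from the true one.
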
“…More recently, proposals have emerged based on training an ML model for a task, first using that model to classify, and then asking humans if the model's confidence is not high enough (Callaghan et al 2018). The effectiveness of such an approach is, consequently, heavily dependent on the reliability of machine confidence, which has been shown to be very poor, especially for deep learning (Guo et al 2017; Balda, Behboodi, and Mathar 2020).…”
Section: AI Workflows and the Metrics That Matter (mentioning)
confidence: 99%
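
The classify-or-defer workflow described in that statement can be sketched as below. The softmax-maximum confidence measure and the 0.9 threshold are illustrative assumptions; as the quoted passage notes, such confidences are often poorly calibrated for deep networks, which is exactly the weakness of this scheme.

```python
import torch
import torch.nn.functional as F

def classify_or_defer(model, x, threshold=0.9):
    """Accept the model's prediction only if its softmax confidence exceeds
    the threshold; otherwise flag the example for human review.
    x is a single input with a batch dimension of 1; the threshold and
    confidence measure are illustrative, not taken from the cited papers."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=-1)
        conf, label = probs.max(dim=-1)
    deferred = conf.item() < threshold
    return label.item(), deferred
```
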
“…In general, AI models are trained on the natural distribution of the data considered in the specific problem (e.g., the distribution of traffic sign images). This distribution, however, lies on a very low-dimensional manifold compared to the complete input space (e.g., all possible images of the same resolution) (Tanay and Griffin, 2016; Balda et al, 2020), which is sometimes referred to as the “curse of dimensionality.” Table 1 shows that the size of the input space for some common tasks is extremely large. Even rather simple, academic AI models such as LeNet-5 for handwritten digit recognition have a huge input space.…”
Section: Key Factors Underlying AI-Specific Vulnerabilities (mentioning)
confidence: 99%
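
To make the size of the input space concrete, a back-of-the-envelope count for 8-bit images: a 28×28 grayscale digit (the LeNet-5 setting) already admits 256^784 ≈ 10^1888 distinct images, while natural data occupy only a tiny manifold within that space. The specific tasks tabulated in the citing paper may differ; the resolutions below are only illustrative.

```python
import math

def input_space_log10(height, width, channels=1, levels=256):
    """log10 of the number of distinct images with the given resolution,
    channel count, and per-pixel intensity levels."""
    return height * width * channels * math.log10(levels)

# 28x28 grayscale (e.g., MNIST digits): about 10^1888 possible images.
print(f"28x28 grayscale: 10^{input_space_log10(28, 28):.0f}")
# 224x224 RGB (e.g., ImageNet-style crops): vastly larger still.
print(f"224x224 RGB:     10^{input_space_log10(224, 224, 3):.0f}")
```
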
“…Besides the arms race in practical attacks and defenses, adversarial attacks have also sparked interest from a theoretical perspective (Goodfellow et al, 2015; Tanay and Griffin, 2016; Biggio and Roli, 2018; Khoury and Hadfield-Menell, 2018; Madry et al, 2018; Ilyas et al, 2019; Balda et al, 2020). Several publications deal with their essential characteristics.…”
Section: Key Factors Underlying AI-Specific Vulnerabilities (mentioning)
confidence: 99%