2021
DOI: 10.3390/app11115235

Methods for Preventing Visual Attacks in Convolutional Neural Networks Based on Data Discard and Dimensionality Reduction

Abstract: The article studies convolutional neural network inference in image-processing tasks under the influence of visual attacks. Attacks of four different types were considered: a simple attack, the addition of white Gaussian noise, an impulse action on one pixel of an image, and attacks that change brightness values within a rectangular area. The MNIST and Kaggle Dogs vs. Cats datasets were chosen. Recognition accuracy characteristics were obtained depending on the number of images subj…
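To make the attack types concrete, here is a minimal NumPy sketch of three of them (Gaussian noise, single-pixel impulse, rectangular brightness change); the "simple" attack is omitted because its definition is not recoverable from this excerpt, and all parameter values are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of three of the visual attacks named in the abstract;
# sigma, the impulse value, and the rectangle geometry are assumptions.
import numpy as np

def gaussian_noise_attack(img, sigma=0.1):
    """Add white Gaussian noise to a float image scaled to [0, 1]."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def one_pixel_attack(img, value=1.0):
    """Impulse action: force one randomly chosen pixel to an extreme value."""
    out = img.copy()
    y, x = np.random.randint(out.shape[0]), np.random.randint(out.shape[1])
    out[y, x] = value
    return out

def rect_brightness_attack(img, top=2, left=2, h=8, w=8, delta=0.5):
    """Shift brightness values inside a rectangular area of the image."""
    out = img.copy()
    out[top:top + h, left:left + w] = np.clip(
        out[top:top + h, left:left + w] + delta, 0.0, 1.0)
    return out
```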

Cited by 18 publications (7 citation statements)
References 36 publications
“…They can serve as an important inductive bias for out-of-distribution samples or regularize the model to avoid potential overfitting. The virtue that learning concepts from data can prevent adversarial attacks is also discussed [40]. Our method is based on the assumption of prototype theory [11] and the hierarchical structure of prototypes and concepts [24,25].…”
Section: Discussion
confidence: 99%
“…In the case of training DCNN models with scarce data, augmentation can effectively extend the performance of DCNN classifiers, avoid overtraining [21], and reduce the possibility of visual attack [22]. Various mathematical models can augment models, which can generate close to real signals and images [23,24].…”
Section: Related Work
confidence: 99%
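The quoted passage credits data augmentation with reducing susceptibility to visual attacks. Purely as an illustration, a minimal torchvision pipeline of the kind the remark alludes to might look as follows; the specific transforms and parameter values are assumptions, not the cited works' configuration.

```python
# A minimal augmentation pipeline sketch; transforms and parameters are
# illustrative assumptions, not the cited works' actual setup.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                # geometric variation
    transforms.RandomRotation(degrees=10),                 # small random rotations
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # photometric jitter
    transforms.ToTensor(),                                 # PIL image -> float tensor
])
```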
“…For instance, in the specific problem of image category labeling, unsupervised deep learning approaches have been found to be powerful [23]. Many other attacks on the problem are possible, including hidden Markov models with Bayesian expectation-maximization [24,25] and dimensionality reduction with PCA [26], among others [27]. We investigate minimizing the energy flow objective function (Definition 1) over unlabeled data sets to obtain Hopfield networks that cluster them.…”
Section: Unsupervised Clustering
confidence: 99%
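The quoted passage mentions dimensionality reduction with PCA, one of the defense families studied in the article above. A minimal sketch of PCA used as an input filter is given below, assuming flattened MNIST-like image arrays; the component count is an arbitrary illustrative choice.

```python
# Sketch of PCA-based dimensionality reduction as an input filter,
# assuming MNIST-like image arrays; n_components=50 is an assumption.
import numpy as np
from sklearn.decomposition import PCA

def pca_filter(train_images, test_images, n_components=50):
    """Project images onto the top principal components and reconstruct."""
    pca = PCA(n_components=n_components)
    pca.fit(train_images.reshape(len(train_images), -1))
    coded = pca.transform(test_images.reshape(len(test_images), -1))
    return pca.inverse_transform(coded).reshape(test_images.shape)
```

Reconstruction through a truncated principal subspace discards the low-variance directions in which small pixel-level perturbations largely reside, at the cost of some fine image detail.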