2018
DOI: 10.3390/make1010011
An Algorithm for Generating Invisible Data Poisoning Using Adversarial Noise That Breaks Image Classification Deep Learning

Abstract: Today, the two main security issues for deep learning are data poisoning and adversarial examples. Data poisoning consists of perverting a learning system by manipulating a small subset of the training data, while adversarial examples entail bypassing the system at testing time with low-amplitude manipulation of the testing sample. Unfortunately, data poisoning that is invisible to human eyes can be generated by adding adversarial noise to the training data. The main contribution of this paper includes a succe…
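To make the threat concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm) of how low-amplitude adversarial noise could be folded into training images so that the poisoned set still looks clean to a human. It assumes a PyTorch-style classifier and uses an FGSM-style gradient-sign step; the function name and the epsilon value are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def poison_batch(model, images, labels, epsilon=2 / 255):
    """Return training images shifted by a small gradient-sign perturbation."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # epsilon is kept small so the change stays invisible to human eyes,
    # while the training signal is systematically shifted.
    poisoned = images + epsilon * images.grad.sign()
    return poisoned.clamp(0, 1).detach()
```

Training on the output of such a routine instead of the clean images is the setting the abstract describes; the paper's contribution lies in how the noise is crafted, which this sketch does not reproduce.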

Cited by 9 publications (19 citation statements)
References 19 publications
“…Determine robustness: A major risk occurs if ML applications are not robust (in terms of not being able to perform appropriately), if data is perturbed, e.g., noisy or wrong, or manipulated in advance to fool the system (e.g., adversarial attacks) as shown by Chang [131]. This requires methods to statistically estimate the model's local and global robustness.…”
Section: Discussion
confidence: 99%
“…Determine robustness: A major risk occurs if ML applications are not robust to perturbed (e.g., noisy or wrong) or even designed adversarial input data, as shown by Chan-Hon-Tong [111]. This requires methods to statistically estimate the model's local and global robustness.…”
Section: Discussion
confidence: 99%
“…A smaller but non-negligible issue is poisoning [17], [18]. Data poisoning (which also works [19] on support vector machines, SVM [20]) is known as the goal of finding small modifications of the training data (the testing data being unchanged) that change the model behaviour at test time, e.g.…”
Section: B. Poisoning
confidence: 99%
“…δ is constrained by an L1 norm, and the goal of the hacker is to take advantage of sensitivity to small perturbations: the goal is to produce symmetric poisoning with only small modifications of all training data instead of heavy modifications of a few training samples. Typically, [18] introduces a symmetric adversarial poisoning attack (SAP) based on energetic landscape hacking.…”
Section: Adversarial Poisoning
confidence: 99%
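As a rough illustration of the "symmetric" idea quoted above, where every training sample receives a small, norm-bounded perturbation instead of a few samples being heavily modified, the sketch below rescales each per-sample perturbation into an L1 budget. The direction function, the budget, and all names are hypothetical placeholders; this is not the SAP method of [18].

```python
import numpy as np

def craft_delta(x, rng):
    # Placeholder perturbation direction; a real attack would derive it from
    # model gradients (e.g., by hacking the loss landscape, as the quoted SAP
    # attack is described as doing).
    return rng.standard_normal(x.shape)

def symmetric_poison(train_x, epsilon=0.01, seed=0):
    """Perturb every training sample slightly, keeping each change within an L1 budget."""
    rng = np.random.default_rng(seed)
    poisoned = []
    for x in train_x:
        delta = craft_delta(x, rng)
        budget = epsilon * x.size          # per-sample L1 budget
        norm = np.abs(delta).sum()
        if norm > budget:                  # rescale the perturbation into the L1 ball
            delta *= budget / norm
        poisoned.append(x + delta)
    return np.stack(poisoned)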