2019 1st International Informatics and Software Engineering Conference (UBMYK)
DOI: 10.1109/ubmyk48245.2019.8965459

Preventing Data Poisoning Attacks By Using Generative Models

Abstract: Machine learning methods have become increasingly popular, and their areas of application have grown along with this popularity. Their use is also expected to grow in cyber security components such as firewalls and antivirus software. However, the use of such machine learning methods brings various risks with it. Attackers develop different methods to manipulate different systems, not only cyber security components but also image detection systems. Therefore…

Cited by 18 publications (13 citation statements) · References 6 publications

“…However, the attacker must solve an optimization problem to decide which regions in the input data must be changed to prevent this manipulation from being easily noticed by the human eye. By solving this optimization problem using one of the available attack methods [1,19,30,40], the attacker aims to reduce the classification performance of the model on the adversarial data as much as possible. In this study, to limit the maximum allowed perturbation for the attacker, we used the l∞ norm, which is the maximum pixel difference limit between original and adversarial images.…”
Section: Capability of the Attacker (mentioning)
confidence: 99%
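The statement above describes an attacker who solves an optimization problem under an l∞ budget to degrade classification performance as much as possible. As a rough illustration only, the sketch below implements FGSM, one well-known l∞-bounded attack; the cited works may rely on other methods, and the model, inputs, and epsilon value here are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return an adversarial copy of x whose l-infinity distance from x is at most epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    grad_sign = x_adv.grad.detach().sign()
    # Move every pixel by +/- epsilon in the direction that increases the loss.
    x_adv = x_adv.detach() + epsilon * grad_sign
    # Project back into the epsilon-ball around x and keep pixels in a valid range.
    x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
    return torch.clamp(x_adv, 0.0, 1.0)
```

The final projection and clamping step is what enforces the "maximum pixel difference limit" between original and adversarial images mentioned in the quote.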
“…We have released our source code on GitHub. To sum up, our main contributions with this paper are:…”
Section: Introduction (mentioning)
confidence: 99%
“…Adversarial machine learning is an attack technique that attempts to fool neural network models by supplying craftily manipulated inputs with small differences [23]. Attackers apply model evasion attacks for phishing, spam, and executing malware code in an analysis environment [24]. Misclassification and misdirection of models also offer some advantages to attackers.…”
Section: Attack to Machine Learning Algorithms: Adversarial Machine L... (mentioning)
confidence: 99%
“…However, to prevent this noise addition from being easily noticed, the attacker must solve an optimization problem to determine which regions in the input data (i.e., beamforming) must be modified. By solving this optimization problem using one of the available attack methods [24], the attacker aims to reduce the prediction performance on the manipulated data as much as possible. In this study, to limit the maximum perturbation allowed for the attacker, we used the l∞ norm, which is the maximum difference limit between original and adversarial instances.…”
Section: Capability of the Attacker (mentioning)
confidence: 99%
“…We have selected the MNIST dataset, which consists of handwritten digits, so that people can easily see and understand the changes in the data. In our previous works [3,38], we applied generative models to both data and model poisoning attacks with limited datasets.…”
Section: Introduction (mentioning)
confidence: 99%
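The abstract and the statement above point to generative models as a countermeasure against poisoning, demonstrated on MNIST. The fragment below is only a minimal sketch of one plausible mechanism, filtering incoming samples by the reconstruction error of a small autoencoder trained on trusted data; the architecture, the threshold, and the helper names are illustrative assumptions, not the method actually proposed in the paper.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Small autoencoder intended to be trained on trusted (clean) MNIST images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 28 * 28), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x)).view(-1, 1, 28, 28)

def split_by_reconstruction_error(ae, batch, threshold=0.02):
    """Flag samples whose mean squared reconstruction error exceeds the threshold
    as suspicious (possibly poisoned) before they reach the training set."""
    with torch.no_grad():
        errors = ((ae(batch) - batch) ** 2).mean(dim=(1, 2, 3))
    keep = errors <= threshold
    return batch[keep], batch[~keep]
```

In such a setup the autoencoder would first be fitted on a clean reference set, and the threshold tuned on held-out clean data so that only clearly anomalous, potentially poisoned samples are rejected.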