2019
DOI: 10.1016/j.cosrev.2019.100199
A taxonomy and survey of attacks against machine learning

Abstract: The majority of machine learning methodologies operate with the assumption that their environment is benign. However, this assumption does not always hold, as it is often advantageous to adversaries to maliciously modify the training (poisoning attacks) or test data (evasion attacks). Such attacks can be catastrophic given the growth and the penetration of machine learning applications in society. Therefore, there is a need to secure machine learning enabling the safe adoption of it in adversarial cases, such …
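The evasion attacks mentioned in the abstract perturb *test* inputs so a trained model misclassifies them. A minimal sketch of the idea, not taken from the paper: an FGSM-style signed-gradient step against a hand-rolled logistic-regression classifier. The weights `w`, bias `b`, input `x`, and step size `eps` are all assumed toy values chosen for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed "trained" model: these weights are assumed, not learned here.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_evasion(x, eps):
    """Perturb x in the direction that lowers the class-1 score.

    For logistic regression, the gradient of the score w.r.t. x is
    proportional to w, so the signed-gradient step is sign(w).
    """
    return x - eps * np.sign(w)

x = np.array([1.0, -1.0, 0.5])     # clean input, scored as class 1
x_adv = fgsm_evasion(x, eps=1.5)   # evasion: modify the *test* input only

print(predict(x))      # high score: classified as class 1
print(predict(x_adv))  # low score: the perturbed input evades class 1
```

A poisoning attack, by contrast, would tamper with the training set before `w` is fit; here only the test-time input is perturbed.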


Cited by 188 publications (121 citation statements); references 55 publications.
“…Adversarial attacks are classified into evasion and poisoning attacks [15]. Our approach corresponds to an evasion attack that bypasses the detection model.…”
Section: Adversarial Attack
confidence: 97%
“…Deep learning has gathered significant interest, and its applications are being explored in many research areas, for example healthcare, automotive design, and law implementation. There are likewise several existing works in the area of NIDS in SDN [17], [18].…”
Section: Background and Related Work
confidence: 99%
“…Adversarial Behaviour: The overall goal of the framework is to detect malicious attacks (e.g., botnets) in the presence of adversarial users [7] who generate malicious traffic to skew the system's perception toward a faked value. We assume that adversaries collaborate to attack the data collection process.…”
Section: LSTM Models
confidence: 99%
“…In the machine learning community, security problems have already been addressed in the form of adversarial machine learning [7]. For instance, novelty detection has been addressed to detect anomalies in acoustic data [8].…”
Section: Introduction
confidence: 99%