Proceedings of the 56th Annual Design Automation Conference (DAC), 2019
DOI: 10.1145/3316781.3323470

Adversarial Machine Learning Beyond the Image Domain

Abstract: Machine learning systems have had enormous success in a wide range of fields, from computer vision and natural language processing to anomaly detection. However, such systems are vulnerable to attackers who can cause deliberate misclassification by introducing small perturbations. With machine learning systems being proposed for cyber-attack detection, such attackers are cause for serious concern. Despite this, the vast majority of adversarial machine learning security research is focused on the image domain. This…
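The abstract's core threat, a small perturbation that flips a classifier's decision, can be sketched with the classic fast-gradient-sign method (FGSM) against a toy logistic-regression model. All weights and inputs below are invented for illustration; the paper itself targets cyber-attack detectors, not this toy model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(dot(w, x) + b)                  # predicted P(class = 1)
    grad_x = [(p - y_true) * wi for wi in w]    # d(cross-entropy)/dx
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad_x)]

# Hypothetical trained weights and a benign input classified as class 1.
w = [1.0, -2.0, 0.5]
b = 0.0
x = [0.3, -0.2, 0.1]                            # score 0.75 -> class 1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)

print(sigmoid(dot(w, x) + b) > 0.5)             # True: original prediction
print(sigmoid(dot(w, x_adv) + b) > 0.5)         # False: flipped by the perturbation
```

The perturbation moves each feature by only ±0.5, yet the decision flips; in non-image domains such as intrusion detection, the extra difficulty (discussed in the citing works below) is that the perturbed input must also remain a valid, realizable network event.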


Cited by 32 publications (16 citation statements)
References 7 publications
“…This has led to a sub-discipline of adversarial machine learning. In [17,18] we examine the vulnerability of such intrusion detection systems to adversarial attacks. The attacker is able to manipulate the data sent to an IDS and seeks to hide their presence.…”
Section: Discussion
confidence: 99%
“…This is known as the inverse feature-mapping problem [12,32,58]. Many works on problem-space attacks have been explored on different domains: text [3,43], PDFs [22,41,45,46,74], Windows binaries [38,59,60], Android apps [23,31,75], NIDS [6,7,20,28], ICS [76], and Javascript source code [58]. However, each of these studies has been conducted empirically and followed some inferred best practices: while they share many commonalities, it has been unclear how to compare them and what are the most relevant characteristics that should be taken into account while designing such attacks.…”
Section: Related Work
confidence: 99%
“…valid, inconspicuous member of the considered domain, and robust to non-ML preprocessing. Existing work investigated problem-space attacks on text [3,43], malicious PDFs [12,22,41,45,46,74], Android malware [23,75], Windows malware [38,60], NIDS [6,7,20,28], ICS [76], source code attribution [58], malicious Javascript [27], and eyeglass frames [62]. However, while there is a good understanding on how to perform feature-space attacks [16], it is less clear what the requirements are for an attack in the problem space, and how to compare strengths and weaknesses of existing solutions in a principled way.…”
Section: Introduction
confidence: 99%
“…As ICSs are commonly attacked by state-sponsored actors, assuming the most knowledgeable adversary is not unrealistic. Related researches [1,4,32] use the white-box threat model as well. If the retraining schedule is not known, the attacker can still apply the same algorithms to calculate the poisoning samples, estimate the maximal retraining period, and increase the intervals between the poison injections to become larger than this estimation.…”
Section: Industrial Control Systems
confidence: 99%