2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00210

Generation & Evaluation of Adversarial Examples for Malware Obfuscation

Abstract: There has been increased interest in the application of convolutional neural networks for image-based malware classification, but the susceptibility of neural networks to adversarial examples allows malicious actors to evade classifiers. Adversarial examples are usually generated by adding small perturbations to the input that are unrecognizable to humans, but the same approach is not effective with malware. In general, these perturbations cause changes in the byte sequences that change the initial function…
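To make concrete the perturbation-based approach the abstract contrasts against, below is a minimal FGSM-style sketch in PyTorch. The model, loss, and epsilon are illustrative assumptions, not the paper's setup; applying such a perturbation directly to executable bytes would typically break the program, which is exactly the limitation the abstract raises.

# Minimal FGSM sketch (illustrative assumptions, not the paper's method):
# perturb an image-domain input by the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Return an adversarial copy of x, bounded by eps in the L-infinity norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Small signed noise: visually imperceptible, but often enough to flip a classifier.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()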

Cited by 35 publications (19 citation statements) · References 21 publications
“…al. show that these simpler transformations are still effective against machine-learning-based malware detection and classification models [26].…”
Section: B. Obfuscation (mentioning)
confidence: 99%
“…In this section, we will review the obfuscation techniques implemented in this study by summarizing definitions laid out in [26], [27]. The transformations that we considered are only those that create variants of themselves, affecting the sequence of opcodes in a binary.…”
Section: B. Obfuscation (mentioning)
confidence: 99%
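As a concrete instance of the variant-creating transformations the excerpt describes, here is a minimal sketch of NOP insertion, one of the simplest opcode-sequence obfuscations. The insertion rate and the assumption that instruction boundaries are already known are illustrative; a real implementation would need a disassembler to find them.

# Sketch of a variant-creating obfuscation: NOP insertion (assumed setup).
# Inserting 0x90 (the one-byte x86 NOP) between instructions changes the
# opcode sequence, and hence the binary's image representation, without
# changing program behavior.
import random

NOP = b"\x90"  # x86 one-byte NOP

def insert_nops(instructions, rate=0.2, seed=0):
    """instructions: iterable of bytes objects, one entry per decoded instruction."""
    rng = random.Random(seed)
    out = bytearray()
    for insn in instructions:
        out += insn
        if rng.random() < rate:
            out += NOP
    return bytes(out)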
“…In contrast, to evade visualization-based malware detectors, Park et al. [96] propose another adversarial attack based on the adversarial malware alignment obfuscation (AMAO) algorithm. Specifically, a non-executable adversarial image is first generated by off-the-shelf adversarial attacks from the field of image classification [14,46].…”
Section: White-Box Adversarial Attacks Against PE Malware Detection (mentioning)
confidence: 99%
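The two-step structure described above (generate a non-executable adversarial image, then steer the real binary toward it) can be sketched as a greedy alignment. Everything below, including the candidate instruction set and the distance metric, is an assumption for illustration; Park et al.'s published AMAO algorithm is more involved.

# Greedy sketch of the alignment idea (illustrative, not the actual AMAO
# algorithm): at an insertion point, pick the semantically inert
# instruction whose bytes are closest to the target adversarial image.
import numpy as np

# Example semantic NOPs: nop; xchg eax, eax.
SEMANTIC_NOPS = [b"\x90", b"\x87\xc0"]

def best_nop(target: np.ndarray, offset: int) -> bytes:
    """Choose the inert instruction that best matches the target image bytes at offset."""
    def dist(nop: bytes) -> float:
        window = target[offset:offset + len(nop)].astype(np.int64)
        cand = np.frombuffer(nop, dtype=np.uint8)[:len(window)].astype(np.int64)
        return float(np.abs(window - cand).sum())
    return min(SEMANTIC_NOPS, key=dist)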
“…Second, regarding selected attack methods, a few notable attack methods include the simple append attack [9], attacks using randomly generated perturbations [4], and attacks using specific perturbations that lower a malware detector's score [5]. More advanced methods incorporate machine learning techniques (Genetic Programming [1], [6], Gradient Descent [3], and Dynamic Programming [7]) and implement advanced DL-based techniques (Generative Adversarial Networks [8], Deep Reinforcement Learning [10], and Generative Recurrent Neural Networks [2], [11]). Third, and most importantly, while a sizable amount of AMG research either does not limit the number of queries to the malware detector or allows multiple queries, few studies (Suciu et al. [9]) operate in a single-shot AMG evasion setting.…”
Section: A. Adversarial Malware Generation (AMG) (mentioning)
confidence: 99%
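To ground the simplest family in the list above, here is a hedged sketch combining the append attack with randomly generated perturbations: overlay bytes appended past the end of a PE file never execute, but they do change the byte stream an image- or byte-based detector scores. The score_fn interface, threshold, and query budget are assumptions for illustration.

# Sketch of an append attack with random payloads (assumed detector API).
# Appended overlay bytes do not alter execution, but they shift the byte
# sequence a learned detector scores.
import os

def append_attack(malware_bytes, score_fn, threshold=0.5,
                  payload_len=1024, tries=10):
    """Try random overlay payloads; return the first evasive variant found."""
    for _ in range(tries):
        candidate = malware_bytes + os.urandom(payload_len)
        if score_fn(candidate) < threshold:
            return candidate
    return None  # no evasive variant within the query budget

Setting tries=1 corresponds to the single-shot evasion setting the excerpt highlights; most of the surveyed methods instead assume a generous or unlimited query budget.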