Exploring Adversarial Examples in Malware Detection
2018 · Preprint
DOI: 10.48550/arxiv.1810.08280

Cited by 9 publications (15 citation statements) · References 0 publications
“…The concept of using adversarial learning to generate malicious adversarial samples was previously investigated by [20,38,39,44] for generating malware that evades machine learning-based antimalware software, and by [14,17] for evading text classifiers.…”
Section: Related Work (mentioning)
confidence: 99%
“…Adversarial Machine Learning Attacks: As analyzed in the previous section, the majority of defense solutions use ML. While ML techniques increase accuracy and enable effective detection of never-before-seen ransomware samples, recent studies have shown that ML-based classifiers are vulnerable to attacks that manipulate either the training data or the test data to bypass detection [177]. Such attacks are called adversarial ML attacks, and they have been applied not only in the computer vision domain but also in other domains, including malware.…”
Section: Comprehensiveness of Defense Solutions (mentioning)
confidence: 99%
“…The Random Append and Gradient Append attacks are two types of append attack: the former appends byte values sampled from a uniform distribution, while the latter gradually modifies the appended byte values using the input gradient. Two additional variations, the Benign Append and the FGM Append, were introduced by [65] to address the long convergence times of the earlier attacks.…”
Section: A. Adversarial Attacks on ML for Endpoint Protection (mentioning)
confidence: 99%
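
To make the append-attack idea concrete, here is a minimal sketch of the Random Append variant described in the statement above. The `predict_malicious_prob` function is a hypothetical stand-in for any byte-level classifier (e.g., a MalConv-style CNN), and the padding size, trial budget, and threshold are illustrative defaults, not values taken from [65].

```python
import numpy as np

# Hypothetical stand-in for a byte-level malware classifier (e.g., a
# MalConv-style CNN): maps a raw byte sequence to P(malicious).
def predict_malicious_prob(byte_seq: np.ndarray) -> float:
    raise NotImplementedError  # plug in the real model here

def random_append_attack(binary: bytes, pad_len: int = 10_000,
                         n_trials: int = 100, threshold: float = 0.5):
    """Random Append attack: pad the binary with bytes drawn uniformly
    from [0, 255] and keep the first padding that drops the classifier's
    score below the detection threshold. Appended bytes lie past the end
    of the executable, so the malware's functionality is preserved."""
    rng = np.random.default_rng()
    original = np.frombuffer(binary, dtype=np.uint8)
    for _ in range(n_trials):
        padding = rng.integers(0, 256, size=pad_len, dtype=np.uint8)
        candidate = np.concatenate([original, padding])
        if predict_malicious_prob(candidate) < threshold:
            return candidate.tobytes()  # evasive variant found
    return None  # no evasive padding found within the trial budget
```

The gradient-guided variants (Gradient Append, FGM Append) replace the random sampling loop with steps along the input gradient, which is what shortens the convergence time the statement refers to.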
“…When a malware binary exceeds the model's maximum input size, it is impossible to append additional bytes to it. Hence, the slack attack proposed by [65] instead exploits the existing bytes of the malware binary. The most common form is the Slack FGM attack, which defines a set of slack bytes that can be freely modified without breaking the malware's functionality.…”
Section: A. Adversarial Attacks on ML for Endpoint Protection (mentioning)
confidence: 99%
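
The sketch below illustrates the slack-byte idea under common assumptions: slack regions are taken to be the gap between each PE section's virtual size and its file-aligned raw size (parsed with the third-party pefile library), and `gradient_wrt_bytes` is a hypothetical helper standing in for the model's gradient signal, since the exact embedding-space reduction used in [65] is elided here.

```python
import numpy as np
import pefile  # third-party PE parser; assumed available

def slack_regions(binary: bytes):
    """Find slack regions: bytes between a section's virtual size and its
    file-aligned raw size. They exist in the file but are unused at
    runtime, so modifying them should not break functionality."""
    pe = pefile.PE(data=binary)
    regions = []
    for sec in pe.sections:
        start = sec.PointerToRawData + sec.Misc_VirtualSize
        end = sec.PointerToRawData + sec.SizeOfRawData
        if end > start:
            regions.append((start, end))
    return regions

# Hypothetical helper: per-byte gradient of the model's malware score
# with respect to the input. For an embedding-based model such as MalConv,
# the gradient lives in embedding space and must be mapped back to byte
# space; that reduction is omitted from this sketch.
def gradient_wrt_bytes(byte_seq: np.ndarray) -> np.ndarray:
    raise NotImplementedError

def slack_fgm(binary: bytes, epsilon: int = 64) -> bytes:
    """Simplified Slack FGM step: nudge only the slack bytes against the
    gradient of the malware score, clipping to the valid byte range."""
    x = np.frombuffer(binary, dtype=np.uint8).copy()
    grad = gradient_wrt_bytes(x)
    for start, end in slack_regions(binary):
        step = -np.sign(grad[start:end]) * epsilon
        new = np.clip(x[start:end].astype(np.int64) + step, 0, 255)
        x[start:end] = new.astype(np.uint8)
    return x.tobytes()
```

A full implementation would also bound each region to the model's input window and iterate the gradient step until the classifier's decision flips or a perturbation budget is exhausted; this single-step version only shows where the perturbation is allowed to land.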