2021
DOI: 10.1186/s42400-021-00079-5
DeepMal: Maliciousness-Preserving Adversarial Instruction Learning against Static Malware Detection

Abstract: Beyond the explosively successful applications of deep learning (DL) in natural language processing, computer vision, and information retrieval, numerous Deep Neural Network (DNN)-based alternatives have emerged for common security-related scenarios, with malware detection among the more popular. Recently, adversarial learning has attracted much attention. However, unlike computer vision applications, a malware adversarial attack is expected to preserve the malware's original malicious semantics. This paper proposes a …


Cited by 18 publications (4 citation statements). References 31 publications.
“…A variation on the method introduced in [25] was presented in 2021 by Yang et al [27]. The authors treated the input EXEs as images, which were used as input to a convolutional neural network.…”
Section: Gradient-based Attacks
confidence: 99%
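The excerpt above describes treating raw executable bytes as grayscale images before feeding them to a CNN. A minimal sketch of that preprocessing step is shown below; the function name `bytes_to_image` and the fixed-width reshaping are illustrative assumptions, not details taken from the cited papers.

```python
# Hypothetical sketch: render a binary's raw bytes as a fixed-width
# grayscale "image" (one byte = one pixel intensity, 0-255), the usual
# first step when feeding executables to an image-based CNN.

def bytes_to_image(data: bytes, width: int = 16, pad: int = 0) -> list[list[int]]:
    """Pad the byte stream to a multiple of `width`, then reshape it
    into rows of pixel intensities."""
    padded = data + bytes([pad]) * (-len(data) % width)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

if __name__ == "__main__":
    # 40 bytes reshaped into 8-pixel-wide rows -> 5 rows of 8 pixels.
    img = bytes_to_image(b"MZ\x90\x00" * 10, width=8)
    print(len(img), len(img[0]))  # → 5 8
```

The resulting 2D array can then be resized or stacked into the input tensor expected by whatever CNN architecture is used; zero-padding the tail is one common convention, though some pipelines truncate instead.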
“…While some studies have addressed adversarial learning and its potential threats [34], [181], this area remains relatively unexplored within the ransomware detection domain. As ransomware detection methods continue to advance, it is expected that future ransomware will exploit adversarial learning techniques to evade detection by mimicking benign behaviours learned from machine learning models.…”
Section: Adversarial Learning
confidence: 99%
“…The model used a fuzzy algorithm, a mathematical technique for handling uncertainty and approximate reasoning, and achieved an accuracy of 98.20%. Yang et al [28] introduced an innovative approach known as the adversarial instruction technique, which leverages machine learning and deep learning methods for Android malware analysis. The researchers applied this technique to three common types of malware samples: Trojan horses, ransomware, and backdoors, while ensuring the original attack functionality of these malware categories was maintained.…”
Section: Related Work
confidence: 99%