2020
DOI: 10.48550/arxiv.2010.16323
Preprint
Being Single Has Benefits: Instance Poisoning to Deceive Malware Classifiers

Abstract: The performance of a machine learning-based malware classifier depends on the large and up-to-date training set used to induce its model. To maintain an up-to-date training set, benign and malicious files must be continuously collected from a wide range of sources, which provides an exploitable target for attackers. In this study, we show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier. The attacker's ultimate goal is t…

Cited by 2 publications (3 citation statements) | References 32 publications
“…In addition, it is almost impossible for the adversary to control or manipulate the labeling process for the poisoned data samples, since most training samples are normally labeled by multiple independent anti-virus engines at security companies and then used to train the malware detection models. Therefore, considering the practicality of the attack scenario in the wild, both [119] and [122] belong to the clean-label poisoning attack category [120], in which the adversary can manipulate the poison instance itself but cannot control its labeling.…”
Section: Training-time Poisoning
confidence: 99%
“…Shapira et al [122] argue that the assumption of feature-space manipulation in [119] is unrealistic and unreasonable for real-world malware classifiers. In pursuit of a poisoning attack in the problem space, Shapira et al propose a novel instance poisoning attack that first selects the goodware most similar to the target malware instance and then adds sections to that goodware so that the poisoned model is adversarially trained.…”
Section: Training-time Poisoning
confidence: 99%
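The selection step described above — picking the benign samples ("goodware") closest to a target malware instance as candidate poison carriers — can be sketched roughly as a nearest-neighbor search over feature vectors. The function name, the Euclidean distance metric, and the toy feature vectors below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def select_similar_goodware(goodware_features, target, k=5):
    """Return indices of the k goodware feature vectors closest to the
    target malware vector, by Euclidean distance (an assumed metric;
    the paper's actual similarity measure may differ)."""
    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, target)))
    # rank all goodware by distance to the target, ascending
    ranked = sorted(range(len(goodware_features)),
                    key=lambda i: dist(goodware_features[i]))
    return ranked[:k]

# toy example: four goodware samples with three numeric features each
goodware = [[0.0, 0.0, 0.0],
            [0.9, 0.1, 0.2],
            [1.0, 0.0, 0.1],
            [5.0, 5.0, 5.0]]
target = [1.0, 0.0, 0.0]
print(select_similar_goodware(goodware, target, k=2))  # → [2, 1]
```

In the attack as summarized by the citing survey, the selected goodware would then be modified (e.g., by appending sections) before being submitted for labeling, which is what makes the attack clean-label: the adversary controls the instance but not its label.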
“…• User-Installed Malware: Some AVs allow passengers to install user applications into the onboard entertainment system. When malware [88][89][90][91] is accidentally installed, it may obtain privileged permissions and a certain level of control over the vehicle.…”
Section: Security Impact on AV Safety
confidence: 99%