2021
DOI: 10.48550/arxiv.2102.06747

Realizable Universal Adversarial Perturbations for Malware

Abstract: Machine learning classification models are vulnerable to adversarial examples: effective input-specific perturbations that can manipulate the model's output. Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of these adversarial examples. Although UAPs have been explored in application domains beyond computer vision, little is known about their properties and implications in the specific context…

Cited by 4 publications (6 citation statements)
References 42 publications
“…Universal Adversarial Perturbation (UAP) is a special type of adversarial attack in which the same single perturbation can be applied across a large set of inputs to cause the target model to misclassify them at test time. To generate problem-space UAPs against PE malware classifiers in the wild, Labaca-Castro et al [73] first prepare a set of available transformations for Windows PE files (e.g., adding sections to the PE file, renaming sections, packing/unpacking) and then perform a greedy search to identify a short sequence of transformations that serves as the UAP for the PE malware classifier.…”
Section: Discussion
confidence: 99%
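The greedy problem-space search described in the citation statement above can be sketched as follows. The transformation names, the toy classifier, and the 0.5 evasion threshold are illustrative assumptions for this sketch, not the authors' implementation, which mutates real PE binaries.

```python
# Sketch of a greedy problem-space UAP search in the style attributed to
# Labaca-Castro et al [73]. Files are abstracted as lists of
# applied-transformation tokens; a real attack would mutate PE bytes.
def add_section(x):    return x + ["add_section"]
def rename_section(x): return x + ["rename_section"]
def pack(x):           return x + ["pack"]

TRANSFORMS = [add_section, rename_section, pack]

def toy_classifier(sample):
    """Toy maliciousness score in [0, 1]; 'pack' reduces it most."""
    return max(0.0, 1.0
               - 0.3 * sample.count("pack")
               - 0.1 * sample.count("add_section"))

def mean_score(samples, sequence, classifier):
    """Average classifier score after applying `sequence` to every sample."""
    def apply_seq(s):
        for t in sequence:
            s = t(s)
        return s
    return sum(classifier(apply_seq(s)) for s in samples) / len(samples)

def greedy_uap(samples, classifier, max_len=4):
    """Greedily build ONE short transformation sequence (the UAP) that
    lowers the classifier's score across all samples simultaneously."""
    sequence = []
    for _ in range(max_len):
        best_t, best = None, mean_score(samples, sequence, classifier)
        for t in TRANSFORMS:
            score = mean_score(samples, sequence + [t], classifier)
            if score < best:          # strict improvement only
                best_t, best = t, score
        if best_t is None:            # no transformation helps: stop early
            break
        sequence.append(best_t)
    return sequence

uap = greedy_uap([[], [], []], toy_classifier)
print([t.__name__ for t in uap])
```

Because the sequence is optimized against the whole sample set at once, the same short sequence then transfers to unseen malware, which is what makes the perturbation "universal".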
“…We summarise the attacks in Table 5: L-BFGS [77]; GradientDescent [78]; Adversarial Sequences [79]; JSMA [62]; DeepFool [80]; AddSent, AddOneSent [81]; GAN [82]; Enchanting Attack [82]; Strategic Attack [64]; C&W (L0, L2, L∞) [83]; FGSM, JSMA [84]; Generative RNN [85]; NPBO [86]; GADGET [87]; JSMA, FGSM, DeepFool, C&W [88]; FGSM [89]; IDS-GAN [90]; ZOO, GAN [91]; One Pixel Attack [92]; Manifold Approximation [93]; FGSM, BIM, PGD [94]; GAN Attack [95]; PWPSA [95]; GA [96]; One Pixel Attack [97]; Opt Attack, GAN Attack [98]; GAMMA [99]; UAP [100]; Variational Auto-Encoder [101]; Best-Effort Search…”
Section: Classification Scheme
confidence: 99%
“…Biggio et al [77] highlighted the threat from skilled adversaries with limited knowledge; more recently, gray-box attacks have received attention: Kuppa et al [92] considered malicious users of the system with knowledge of its features and architecture, recognizing that attackers may differ in their level of knowledge of the system. Labaca-Castro et al [99] used universal adversarial perturbations, showing that unprotected systems remain vulnerable even under limited-knowledge scenarios. Li et al [101] considered limited-knowledge attacks against cyber-physical systems and successfully deployed universal adversarial perturbations where attackers have incomplete knowledge of measurements across all sensors.…”
Section: Adversarial Examples: Types of Attack
confidence: 99%
“…These attacks can take the form of adversarial patches and objects for image classification [2], person recognition [26], and camera-based [7,8] and LiDAR-based object detection [3,9,10,28]. In the digital domain, UAPs have been shown to facilitate realistic attacks on perceptual ad-blockers for web pages [27] and machine learning-based malware detectors [16]. Furthermore, an attacker can use UAPs to perform query-efficient black-box attacks on neural networks [4,6].…”
Section: Introduction
confidence: 99%