2019 UK/China Emerging Technologies (UCET)
DOI: 10.1109/ucet.2019.8881843

Adversarial Machine Learning Attack on Modulation Classification

Abstract: Modulation classification is an important component of cognitive self-driving networks. Recently, many ML-based modulation classification methods have been proposed. We have evaluated the robustness of 9 ML-based modulation classifiers against the powerful Carlini & Wagner (C-W) attack and showed that the current ML-based modulation classifiers do not provide any deterrence against adversarial ML examples. To the best of our knowledge, we are the first to report the results of the application of the C-W attack …
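
The page only reproduces the abstract, so none of the authors' implementation appears here. As a rough illustration of the kind of optimisation-based attack the C-W method performs against a modulation classifier operating on raw I/Q frames, a minimal PyTorch sketch follows. The ToyAMCNet network, the 2x128 I/Q input shape, the fixed trade-off constant c, and all other hyper-parameters are assumptions made purely for illustration; the paper itself evaluates nine existing ML classifiers, not this toy model.

# Minimal sketch (not the authors' code) of a Carlini & Wagner style L2 attack
# against a toy modulation classifier operating on raw I/Q samples. All names
# and hyper-parameters here are illustrative assumptions.
import torch
import torch.nn as nn

class ToyAMCNet(nn.Module):
    """Stand-in 1D-CNN modulation classifier: input is (batch, 2, 128) I/Q."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 16, 7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )
    def forward(self, x):
        return self.net(x)

def cw_l2_attack(model, x, y, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Simplified untargeted C-W L2 attack.

    Uses the tanh change of variables so the adversarial example stays in a
    bounded range, and a fixed trade-off constant c (the full attack binary-
    searches over c per example). Returns the perturbed I/Q tensor.
    """
    x = x.clamp(-1 + 1e-6, 1 - 1e-6)          # assume inputs scaled to [-1, 1]
    w = torch.atanh(x).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = torch.tanh(w)
        logits = model(x_adv)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # Largest logit other than the true class.
        other = logits.clone()
        other.scatter_(1, y.unsqueeze(1), float('-inf'))
        best_other = other.max(dim=1).values
        # f(x') > 0 only while the model still predicts the true class.
        f = torch.clamp(true_logit - best_other, min=-kappa)
        l2 = ((x_adv - x) ** 2).flatten(1).sum(dim=1)
        loss = (l2 + c * f).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.tanh(w).detach()

if __name__ == "__main__":
    model = ToyAMCNet()
    x = torch.tanh(torch.randn(4, 2, 128))    # fake I/Q frames in [-1, 1]
    y = model(x).argmax(dim=1)                # current predictions
    x_adv = cw_l2_attack(model, x, y)
    print("fooled:", (model(x_adv).argmax(dim=1) != y).float().mean().item())

The full C-W attack additionally binary-searches the constant c and keeps the minimum-norm successful perturbation; the sketch keeps c fixed to stay short.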

Cited by 17 publications (11 citation statements); References 19 publications.

Citation statements (ordered by relevance):

“…While FGSM is a computationally cheap method for creating adversarial examples, the large body of literature in adversarial ML for CV has yielded algorithms that can evade classifiers with even smaller perturbations. In [88], a more sophisticated adversarial methodology was used to carry out an attack on AMC [99]. Not only was this attack successful for a DNN, but, when the adversarial examples were input to classifiers not based on DNNs (i.e.…”
Section: Untargeted Digital Attacks
Mentioning confidence: 99%
“…Therefore, both the transmission and perturbation are impacted by channel effects, hardware impairments at both the transmitter and receiver, and DSP pre-processing techniques used before reaching the DNN for classification (a physical attack in Figure 4). All of these can serve as an impediment to an attacker, forcing them to raise their adversarial perturbation power [85], [86], [88], [114]. Additionally, so-called white-box attacks, which assume full knowledge of the target DNN, are generally known to be more effective than black-box attacks, which assume close to zero knowledge about the target, regardless of modality.…”
Section: Becoming Robust To Attacks
Mentioning confidence: 99%
“…Similar to other DNN-based applications, DNN-based wireless applications are susceptible to adversarial attacks [1]-[4], [6], [7], [12], [14], [15], [31]-[34], [36]. Flowers et al. [7] use the FGSM method to evaluate the vulnerabilities of the raw in-phase and quadrature (IQ) based automatic modulation classification task.…”
Section: B. Adversarial Examples
Mentioning confidence: 99%
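
The passage above describes FGSM applied to raw I/Q input. A minimal sketch of that single-step attack is given below; it reuses the illustrative ToyAMCNet classifier assumed in the earlier C-W sketch and is not the code of [7].

# Minimal FGSM sketch on raw I/Q frames (illustrative assumption, not the code of [7]).
# Reuses the ToyAMCNet stand-in classifier defined in the earlier C-W sketch.
import torch
import torch.nn.functional as F

def fgsm_iq(model, x, y, eps=0.01):
    """One-step FGSM: move every I/Q sample by eps in the direction of the loss gradient's sign."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Usage, assuming `model`, `x`, `y` as in the earlier sketch:
# x_adv = fgsm_iq(model, x, y, eps=0.01)
# print("fooled:", (model(x_adv).argmax(1) != y).float().mean().item())
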
“…Discussions of Adversarial ML [161], [162] date back at least 15 years [163]-[165] and have broadened to include exploratory attacks that seek to learn information about (or replicate) the classifier [166] or training data [167] through limited probes on the model to observe its input/output relationship. However, the most recent explosion in concern for the vulnerabilities of DNNs specifically is largely credited to the Fast Gradient Sign Method (FGSM) [133], [153], [154], [155]-[159]. (Figure 3 caption: Threat Model for RFML, adopted from [156], [160] and including related work.)…”
Section: A. Adversarial Machine Learning
Mentioning confidence: 99%
“…While FGSM is a computationally cheap method for creating adversarial examples, the large body of literature in adversarial ML for CV has yielded algorithms that can evade classifiers with even smaller perturbations. In [159], a more sophisticated adversarial methodology [169] was used to carry out an attack on AMC; not only was this attack successful for a DNN, but, when the adversarial examples were input to classifiers not based on DNNs (e.g. Support Vector Machine (SVM), Decision Trees, Random Forests, etc.)…”
Section: A. Adversarial Machine Learning
Mentioning confidence: 99%
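
The quoted statements note that adversarial examples crafted against a DNN also degraded classical, non-DNN classifiers such as SVMs. A hedged sketch of how such a transferability check could be run is shown below; it reuses the illustrative ToyAMCNet model and cw_l2_attack function from the first sketch, and the SVM, the random data, and the labels are all assumptions for illustration rather than the evaluation performed in [159]/[169].

# Illustrative transferability check (a sketch under the assumptions above, not the
# paper's evaluation): craft adversarial I/Q frames against the DNN, then measure how
# often an SVM trained on the same frames changes its decision on the perturbed inputs.
# Reuses ToyAMCNet and cw_l2_attack from the first sketch.
import numpy as np
import torch
from sklearn.svm import SVC

def flatten_iq(x):
    """Flatten (batch, 2, 128) I/Q tensors into plain feature vectors for scikit-learn."""
    return x.detach().numpy().reshape(len(x), -1)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyAMCNet()                          # DNN surrogate from the first sketch
    x = torch.tanh(torch.randn(64, 2, 128))      # fake I/Q frames in [-1, 1]
    y = model(x).argmax(dim=1)                   # labels taken from the DNN's own output

    svm = SVC(kernel="rbf").fit(flatten_iq(x), y.numpy())   # non-DNN classifier

    x_adv = cw_l2_attack(model, x, y)            # perturbations crafted on the DNN only
    clean_pred = svm.predict(flatten_iq(x))
    adv_pred = svm.predict(flatten_iq(x_adv))
    print("SVM decisions flipped by DNN-targeted perturbations:",
          float(np.mean(clean_pred != adv_pred)))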