2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP) 2019
DOI: 10.1109/globalsip45357.2019.8969138

Adversarial Examples in RF Deep Learning: Detection and Physical Robustness

Abstract: While research on adversarial examples in machine learning for images has been prolific, similar attacks on deep learning (DL) for radio frequency (RF) signals and their mitigation strategies are scarcely addressed in the published work, with only one recent publication in the RF domain [1]. RF adversarial examples (AdExs) can cause drastic, targeted misclassification results, mostly in spectrum sensing/survey applications (e.g., BPSK mistaken for OFDM), with minimal waveform perturbation. It is not clear if the…

Cited by 64 publications (43 citation statements); references 17 publications.
“…These attacks can be launched separately or combined, i.e., causative and evasion attacks can be launched by making use of the inference results from an exploratory attack [23]. For wireless applications, the evasion attack was considered in [24], [25], [26], [27] by adding adversarial perturbations to fool receivers into misclassifying signal types (such as modulations). Adversarial distortions were considered in [28] to support anti-jamming by deceiving the jammers' learning algorithms in a game-theoretic framework.…”
Section: Related Work
confidence: 99%
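The evasion attacks cited above typically add a small, bounded perturbation to the transmitted waveform. A minimal sketch of the idea, assuming a toy real-valued waveform and a hypothetical precomputed loss gradient (the actual gradient would come from the classifier under attack):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast gradient sign method: step the input in the direction of the
    sign of the loss gradient, with per-sample magnitude epsilon."""
    return x + epsilon * np.sign(grad)

# Toy BPSK-like samples and an illustrative (made-up) loss gradient
x = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
grad = np.array([0.3, -0.2, 0.1, -0.4, 0.5])
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
# Perturbation is bounded: max deviation from x is exactly epsilon
```

The point of the attack is that `epsilon` can be kept small relative to the signal power while still flipping the classifier's decision.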
“…As an extension to the wireless domain, adversarial ML has been applied to infer the transmit behavior driven by ML and to jam the test and/or training phases [7]. Evasion attacks on modulation classification have been studied in [18]–[20], which use the fast gradient sign method (FGSM) to craft adversarial perturbations (see [21] for details) with which an adversary can make the receiver misclassify a received signal. Similarly, [20] considers the same evasion attack model and proposes a statistical detection method based on the peak-to-average-power ratio (PAPR) of the signals.…”
Section: Related Work
confidence: 99%
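The PAPR statistic referred to in [20] is straightforward to compute, which is what makes it attractive as a detector: an additive adversarial perturbation tends to raise the peak power of an otherwise constant-envelope signal. A minimal sketch (the signals here are illustrative, not from the cited work):

```python
import numpy as np

def papr(x):
    """Peak-to-average-power ratio of a complex baseband signal."""
    power = np.abs(x) ** 2
    return power.max() / power.mean()

# A constant-envelope signal (e.g. ideal BPSK) has PAPR = 1
bpsk = np.array([1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j])

# An additive perturbation concentrated on a few samples raises the peak
perturbed = bpsk + np.array([0.3 + 0j, 0, 0, 0])
```

A detector of this kind would flag signals whose PAPR deviates from the value expected for the claimed modulation.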
“…Evasion attacks on modulation classification have been studied in [18]–[20], which use the fast gradient sign method (FGSM) to craft adversarial perturbations (see [21] for details) with which an adversary can make the receiver misclassify a received signal in an evasion attack. Similarly, [20] considers the same evasion attack model and proposes a statistical method based on the peak-to-average-power ratio (PAPR) of the signals. In the Trojan attack, since the perturbations are introduced by slightly rotating the signals, the change in PAPR is not necessarily significant, as only a small phase shift is applied to a small number of samples.…”
Section: Related Work
confidence: 99%
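The statement above hinges on a simple fact: a pure phase rotation has unit magnitude, so it leaves every sample's power, and hence the PAPR, unchanged. A minimal sketch with illustrative values (the rotation angle and sample indices are assumptions, not taken from the cited attack):

```python
import numpy as np

def rotate_samples(x, idx, phase):
    """Rotate the selected samples by a fixed phase. Since
    |exp(j*phase)| = 1, per-sample power, and thus PAPR, is preserved."""
    y = x.copy()
    y[idx] = y[idx] * np.exp(1j * phase)
    return y

x = np.array([1 + 0j, -1 + 0j, 1 + 0j, -1 + 0j])
y = rotate_samples(x, [0, 2], np.deg2rad(5))
# Magnitudes are identical, so a PAPR-based detector sees no change
```

This is why a PAPR test, effective against additive perturbations, can miss rotation-based Trojan triggers.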
“…Hence, the outputs of the Softmax layer are 3-dimensional vectors, which are plotted in the figures for all data points used for training (close to 20,000, shown on the left) and for their adversarial examples (shown in the plot on the right). The elements of the vectors are values between 0 and 1, representing the probabilities of the classes (2). Figure 11 shows these vectors after 40 epochs of conventional training of the CNN, upon convergence of the loss function and after the achieved accuracy exceeded 99%.…”
Section: A. How the AE Changes the Separating Hyperplanes
confidence: 99%
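The property relied on above (Softmax outputs lying in (0, 1) and summing to 1, so each 3-vector is a point on the probability simplex) can be sketched as follows; the logit values are illustrative, not from the paper:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: subtracting the max logit avoids
    overflow without changing the result."""
    e = np.exp(z - z.max())
    return e / e.sum()

# A 3-class output, one logit per class
logits = np.array([2.0, 1.0, -1.0])
p = softmax(logits)
# Each element lies in (0, 1) and the elements sum to 1,
# so p can be read as class probabilities
```

Plotting these 3-vectors, as the paper does, places every input on the 2-simplex, making the separation between clean and adversarial points visible.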