2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN)
DOI: 10.1109/dyspan.2019.8935782

Trojan Attacks on Wireless Signal Classification with Adversarial Machine Learning

Abstract: We present a Trojan (backdoor or trapdoor) attack that targets deep learning applications in wireless communications. A deep learning classifier is considered that classifies wireless signals using raw (I/Q) samples as features and modulation types as labels. An adversary slightly manipulates the training data by inserting Trojans (i.e., triggers) into only a few training samples, modifying their phases and changing their labels to a target label. This poisoned training data is used to train the deep…
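The poisoning step described in the abstract can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' code: the function name insert_phase_trojan, the poisoning fraction, and the fixed phase offset are all assumptions chosen for clarity.

```python
import numpy as np

def insert_phase_trojan(X, y, target_label, poison_frac=0.01,
                        phase_shift=np.pi / 8, seed=0):
    """Poison a small fraction of I/Q training samples with a phase-shift trigger.

    X: complex-valued array of shape (num_samples, num_iq_samples)
    y: integer label array of shape (num_samples,)
    Returns poisoned copies of X and y, plus the poisoned indices.
    """
    rng = np.random.default_rng(seed)
    X_p, y_p = X.copy(), y.copy()
    n_poison = max(1, int(poison_frac * len(X)))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # The trigger: rotate the phase of the raw I/Q samples by a fixed offset.
    X_p[idx] = X_p[idx] * np.exp(1j * phase_shift)
    # Flip the labels of the triggered samples to the attacker's target class.
    y_p[idx] = target_label
    return X_p, y_p, idx
```

In this sketch the trigger is a constant phase rotation applied to the complex baseband samples; the paper's actual trigger design and poisoning rate may differ.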

Cited by 82 publications (42 citation statements)
References 28 publications
“…Another key research direction going forward is identifying the type of information being transmitted from an RFML-enabled device to increase attack effectiveness through targeted attacks on acknowledgement messages or transmission decisions, for example [117], [120]. Further, recent work in determining how the training data of RFML systems can be manipulated to cause a degradation in model performance [80], [121] motivates the study of data cleaning methodologies for RFML. Such concerns echo the discussion in Section IV surrounding the need for transparency regarding the generation and metadata parameters for publicly available RFML datasets, as well as validating said datasets before use.…”
Section: Discussion and Future Work (mentioning)
confidence: 99%
“…Outlier detection mechanisms based on activations can be combined with clustering and statistical techniques to detect this attack. The clustering technique can detect the attack even when only a few samples are poisoned, and hence it can detect Trojan attacks [31].…”
Section: Literature Review (mentioning)
confidence: 99%
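The activation-based defense sketched in this citation statement can be illustrated with per-class clustering of hidden-layer activations. The snippet below is a hedged sketch, not the cited method's implementation: the function name, the two-cluster assumption, and the cluster-size threshold are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def flag_poisoned_by_activation_clustering(activations, labels,
                                           size_threshold=0.35):
    """Per-class activation clustering in the spirit of the defense above.

    activations: (num_samples, feature_dim) penultimate-layer activations
    labels: (num_samples,) class labels assigned by the model
    Returns a boolean mask marking samples in a suspiciously small cluster.
    """
    suspicious = np.zeros(len(labels), dtype=bool)
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        if len(idx) < 2:
            continue
        # Reduce dimensionality before clustering to stabilise k-means.
        n_comp = min(10, len(idx) - 1, activations.shape[1])
        reduced = PCA(n_components=n_comp).fit_transform(activations[idx])
        assign = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
        # Poisoned samples tend to form a small, separate cluster
        # within the target class.
        sizes = np.bincount(assign, minlength=2)
        small = int(np.argmin(sizes))
        if sizes[small] / len(idx) < size_threshold:
            suspicious[idx[assign == small]] = True
    return suspicious
```

The intuition is that trigger-bearing samples labeled with the target class activate the network differently from clean samples of that class, so they separate into a minority cluster.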
“…Researchers in [32]-[34] investigated attacks on 802.11 networks. Similarly, the authors in [35]-[38], [7]-[11], and [24]-[39] investigated attacks in sensor networks, multi-hop networks, and other network models, respectively. Sadeghi and Larsson [40] described how black-box adversarial attacks are carried out and their impact on the transmission system's block error rate.…”
Section: Literature Review (mentioning)
confidence: 99%
“…AML attacks have been applied to the wireless domain [16]-[18]. These attacks include inference (exploratory) attacks [19]-[21], evasion (adversarial) attacks [22]-[42], poisoning (causative) attacks [43]-[47], Trojan attacks [48], spoofing attacks [49]-[52], membership inference attacks [53], [54], and attacks to facilitate covert communications [55]-[57]. FL itself is vulnerable to various insider exploits such as data poisoning (a malicious client may manipulate its training data, including both labels and features), model update poisoning (a malicious client may send manipulated models to the server), free-riding attacks (a malicious client may claim little to no training data to contribute to FL while receiving the global model without contributing much from its local model), and inference of class representatives, memberships, and training inputs and labels [6], [58]-[60].…”
Section: Introduction (mentioning)
confidence: 99%
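Two of the federated-learning insider exploits listed in this statement, data poisoning and model update poisoning, can be sketched from a malicious client's perspective. The snippet below is illustrative only; the function names and the update-scaling heuristic are assumptions for this sketch, not methods taken from the cited works.

```python
import numpy as np

def label_flip_update(local_X, local_y, source_label, target_label):
    """Data poisoning: a malicious FL client flips source-class labels
    to the attacker's target class before local training."""
    y_poisoned = local_y.copy()
    y_poisoned[local_y == source_label] = target_label
    return local_X, y_poisoned

def scaled_model_update(honest_update, boost=10.0):
    """Model update poisoning: a malicious client scales its update
    (a dict of layer-name -> numpy weight delta, a hypothetical layout)
    so that it dominates the server's averaged aggregate."""
    return {name: boost * delta for name, delta in honest_update.items()}
```

Both attacks exploit the server's limited visibility into client data and training, which is why the quoted passage pairs them with detection concerns such as inference of memberships and training inputs.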