IEEE INFOCOM 2020 - IEEE Conference on Computer Communications 2020
DOI: 10.1109/infocom41043.2020.9155389
Threats of Adversarial Attacks in DNN-Based Modulation Recognition

Cited by 94 publications (41 citation statements)
References 21 publications
“…Here we discuss the different types of white-box attacks studied in the literature [12], [25]. These attack models are used in our experiments.…”
Section: Attack Models (mentioning)
confidence: 99%
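The white-box setting assumes the attacker can compute gradients through the victim model. As a concrete illustration (not taken from the paper), here is a minimal sketch of the fast gradient sign method (FGSM), one standard white-box attack; the victim architecture, input shapes, and `eps` budget are all assumptions.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: move each input along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Perturb every sample in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Hypothetical victim: a 128-sample I/Q signal flattened to 256 features,
# 11 modulation classes (sizes are illustrative, not from the paper).
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 11))
x = torch.randn(8, 256)
y = torch.randint(0, 11, (8,))
x_adv = fgsm_attack(model, x, y, eps=0.01)
```

Iterative variants (e.g., PGD) repeat this step with a projection back onto the `eps`-ball, which is why they are usually stronger than the single-step attack sketched here.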
“…This type of deep learning can adapt to any improvement in the hidden layers during training, and training proceeds through the backpropagation algorithm. Since a DNN works well as a prediction model for large-scale, complex data, it is considered suitable for deep-learning-based education prediction [10, 11].…”
Section: Deep Neural Network (DNN) (mentioning)
confidence: 99%
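To make the quoted description concrete, below is a minimal sketch of a fully connected DNN trained by backpropagation; the layer widths, optimizer, and synthetic batch are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical fully connected DNN; layer widths are illustrative.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(32, 128)          # synthetic feature batch
y = torch.randint(0, 10, (32,))   # synthetic labels

for _ in range(5):                # a few training steps
    opt.zero_grad()
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()               # backpropagation through every hidden layer
    opt.step()                    # gradient step updates the hidden-layer weights
```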
“…Hence, the deep learning (DL) system wrongly uses these inputs and misclassifies the input signals [10, 11]. These misclassifications are not caused by "common white noise" but by a distinct attribute in the feature space that leads to incorrect model outputs [12][13][14][15][16][17][18][19][20][21][22][23].…”
Section: Literature Review (mentioning)
confidence: 99%
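The quoted distinction, that adversarial perturbations are structured rather than random white noise, can be sketched as follows; both helper names and all shapes are hypothetical, and the loss-gradient direction is what separates the adversarial perturbation from Gaussian noise.

```python
import torch
import torch.nn as nn

def white_noise_like(x, sigma):
    """Random white noise: its direction is independent of the model."""
    return sigma * torch.randn_like(x)

def adversarial_perturbation(model, x, y, eps):
    """Structured perturbation: its direction is chosen to raise the loss."""
    x = x.clone().detach().requires_grad_(True)
    nn.CrossEntropyLoss()(model(x), y).backward()
    return eps * x.grad.sign()

# Illustrative comparison at the same L-infinity budget.
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 11))
x, y = torch.randn(8, 256), torch.randint(0, 11, (8,))
noisy = x + white_noise_like(x, 0.01)
attacked = x + adversarial_perturbation(model, x, y, 0.01)
```

At an equal perturbation size, the loss-aligned direction typically degrades accuracy far more than the random one, which is the point the quoted passage makes.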