2022
DOI: 10.48550/arxiv.2202.08185
Preprint
The Adversarial Security Mitigations of mmWave Beamforming Prediction Models using Defensive Distillation and Adversarial Retraining

Abstract: The design of a security scheme for beamforming prediction is critical for next-generation wireless networks (5G, 6G, and beyond). However, there is no consensus about protecting the beamforming prediction using deep learning algorithms in these networks. This paper presents the security vulnerabilities in deep learning for beamforming prediction using deep neural networks (DNNs) in 6G wireless networks, which treats the beamforming prediction as a multi-output regression problem. It is indicated that the init…
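The abstract above pairs a DNN beam predictor with two defenses (adversarial retraining and defensive distillation). As a minimal illustration of the attack side only, here is a hypothetical FGSM-style sketch against a toy one-feature linear regressor standing in for the paper's DNNs; every name and value (`w`, `b`, `eps`, `lr`, the functions) is invented for illustration and is not the paper's code:

```python
# Hypothetical sketch, NOT the paper's implementation: an FGSM-style attack
# and one adversarial-retraining step on a toy one-feature linear regressor
# (a stand-in for a DNN beamforming predictor). All names are invented.

def predict(w, b, x):
    return w * x + b

def loss(w, b, x, y):
    return (predict(w, b, x) - y) ** 2

def fgsm_perturb(w, b, x, y, eps):
    # FGSM: move the input by eps in the sign of dL/dx to raise the loss.
    # For squared error, dL/dx = 2 * (w*x + b - y) * w.
    grad_x = 2.0 * (predict(w, b, x) - y) * w
    sign = (grad_x > 0) - (grad_x < 0)
    return x + eps * sign

def adversarial_training_step(w, b, x, y, eps, lr):
    # Adversarial (re)training: take the gradient step on the
    # adversarially perturbed input rather than the clean one.
    x_adv = fgsm_perturb(w, b, x, y, eps)
    err = predict(w, b, x_adv) - y
    return w - lr * 2.0 * err * x_adv, b - lr * 2.0 * err

# The perturbed input should never lower the loss:
w, b, x, y, eps = 2.0, 0.0, 1.0, 2.5, 0.1
x_adv = fgsm_perturb(w, b, x, y, eps)
assert loss(w, b, x_adv, y) >= loss(w, b, x, y)
w, b = adversarial_training_step(w, b, x, y, eps, lr=0.05)
```

In a real DNN the input gradient would come from backpropagation; the closed-form `dL/dx` here is only possible because the stand-in model is linear.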

Cited by 2 publications (4 citation statements)
References 20 publications
“…However, DL-based solutions are vulnerable to adversarial attacks. With these vulnerabilities in mind, the authors of [189] study four different types of adversarial attacks and propose two methods of counterattacking them: adversarial training and defensive distillation. Their results reveal that the proposed methods effectively defend the DL models against the studied adversarial attacks.…”
Section: Security Of AI Models
confidence: 99%
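The defensive-distillation countermeasure named in the statement above can be sketched, for a regression setting like the paper's, as training a "student" model on a frozen "teacher" model's predictions (soft targets) rather than the raw labels. The toy linear models and all names below (`teacher_predict`, `train_student`, `lr`) are hypothetical, not drawn from the cited work:

```python
# Hypothetical sketch, NOT the paper's implementation: defensive
# distillation for regression. The student is fit to the teacher's
# outputs, which tends to smooth the learned function. All names invented.

def teacher_predict(x):
    return 2.0 * x + 0.1          # frozen teacher, toy linear model

def train_student(xs, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x in xs:
            soft_y = teacher_predict(x)   # soft target from the teacher
            err = (w * x + b) - soft_y
            w -= lr * 2.0 * err * x       # dL/dw for squared error
            b -= lr * 2.0 * err           # dL/db
    return w, b

w, b = train_student([0.0, 0.5, 1.0, 1.5, 2.0])
# student closely matches the teacher on the training inputs
assert abs((w * 1.0 + b) - teacher_predict(1.0)) < 1e-3
```

In the classification form of defensive distillation the teacher's softmax is softened with a temperature; for a multi-output regression target, as here, distillation reduces to fitting the teacher's predicted values directly.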
“…Unfortunately, the study of how adversarial attacks can affect the performance of systems deploying ML-assisted beamforming is still in its infancy, requiring much more attention as it poses high risks to such systems. However, a few works are already available in the literature discussing such issues [189].…”
Section: Privacy and Security
confidence: 99%
“…However, DL-based solutions are vulnerable to adversarial attacks. With these vulnerabilities in mind, the authors of [184] study four different types of adversarial attacks and propose two methods of counterattacking them: adversarial training and defensive distillation. Their results reveal that the proposed methods effectively defend the DL models against the studied adversarial attacks.…”
Section: DL
confidence: 99%
“…Unfortunately, the study of how adversarial attacks can affect the performance of systems deploying ML-assisted beamforming is still in its infancy, requiring much more attention as it poses high risks to such systems. However, a few works are already available in the literature discussing such issues [184].…”
Section: Privacy and Security
confidence: 99%