2018 IEEE 43rd Conference on Local Computer Networks Workshops (LCN Workshops)
DOI: 10.1109/lcnw.2018.8628538

Adversarial Attacks on Cognitive Self-Organizing Networks: The Challenge and the Way Forward

Abstract: Future communications and data networks are expected to be largely cognitive self-organizing networks (CSON). Such networks will have the essential property of cognitive self-organization, which can be achieved using machine learning techniques (e.g., deep learning). Despite their potential, these techniques in their current form are vulnerable to adversarial attacks that can cause cascaded damage with detrimental consequences for the whole network. In this paper, we explore the effect of adv…

Cited by 20 publications (15 citation statements) · References 44 publications
“…It aims at assessing the security robustness of ML algorithms against attacks and designing appropriate countermeasures. While AML has attracted much interest in the computer vision field, only a few contributions (e.g., [15], [16]) have addressed ML security in the context of service and network management. Usama et al. [15] highlight the importance of tackling adversarial attacks against cognitive self-organizing networks.…”
Section: E. Adversarial ML for ZSM (mentioning)
confidence: 99%
“…While AML has attracted much interest in the computer vision field, only a few contributions (e.g., [15], [16]) have addressed ML security in the context of service and network management. Usama et al. [15] highlight the importance of tackling adversarial attacks against cognitive self-organizing networks. As a proof of concept, white-box evasion attacks against a convolutional neural network (CNN) have been designed to show how a malware classifier can be evaded.…”
Section: E. Adversarial ML for ZSM (mentioning)
confidence: 99%
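As a rough illustration of the kind of white-box evasion attack described in that excerpt, the sketch below crafts an FGSM-style adversarial example against a generic PyTorch classifier. The model, tensor shapes, valid-feature range, and epsilon budget are illustrative assumptions, not details of the cited authors' implementation.

```python
# Minimal sketch of a white-box FGSM-style evasion attack (assumed PyTorch
# model and input range; not the cited work's implementation).
import torch
import torch.nn as nn

def fgsm_evasion(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Return x perturbed by epsilon * sign(grad_x loss), clipped to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the classifier's loss.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```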
“…The privacy attacks aim to obtain private information about the system, its users, or its data by reverse-engineering the learning algorithm. Attacks against ML can also be divided into two categories based on the attacker's knowledge [12], namely: (i) white-box attacks, which assume that the attacker has complete knowledge about the training data, the algorithm, and its hyper-parameters; (ii) black-box attacks, which assume that the attacker has no knowledge about the algorithm and its hyper-parameters.…”
Section: AI/ML-based Attacks (mentioning)
confidence: 99%
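To make the knowledge distinction concrete: a black-box attacker can only query the model's predictions, with no access to gradients or parameters. The sketch below contrasts with the gradient-based white-box attack above; the predict() callable, feature range, and query budget are assumptions made for illustration.

```python
# Illustrative black-box attack: only the predicted label is observable,
# no gradients or model internals. predict(), the feature range, and the
# query budget are assumptions for this sketch.
from typing import Callable, Optional
import numpy as np

def black_box_random_search(predict: Callable[[np.ndarray], int], x: np.ndarray,
                            true_label: int, epsilon: float = 0.05,
                            n_queries: int = 500, seed: int = 0) -> Optional[np.ndarray]:
    """Probe random points in an epsilon-ball around x until the label flips."""
    rng = np.random.default_rng(seed)
    for _ in range(n_queries):
        candidate = np.clip(x + rng.uniform(-epsilon, epsilon, x.shape), 0.0, 1.0)
        if predict(candidate) != true_label:  # success: the classifier is evaded
            return candidate
    return None  # no adversarial example found within the query budget
```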
“…To mitigate model inversion and model extraction attacks, various solutions have been proposed, ranging from restricting the information provided by ML APIs and adding noise to the ML predictions, to adding noise to the execution time of the ML model. While AML has attracted much interest in the computer vision field, only a few contributions (e.g., [12], [11]) have addressed ML security in the context of service and network management. Usama et al. [12] highlight the importance of tackling adversarial attacks against cognitive self-organizing networks.…”
Section: AI/ML Security (mentioning)
confidence: 99%
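One of the mitigations mentioned in that excerpt, adding noise to the predictions returned by an ML API, can be sketched as follows. The function name and Gaussian noise scale are assumptions; a real deployment would calibrate the noise against both extraction risk and accuracy loss.

```python
# Hedged sketch of prediction-noising to hinder model extraction/inversion:
# perturb the returned class probabilities and renormalize them.
from typing import Optional
import numpy as np

def noisy_prediction(probabilities: np.ndarray, scale: float = 0.05,
                     rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Add Gaussian noise to a probability vector and renormalize it."""
    rng = rng or np.random.default_rng()
    noisy = np.clip(probabilities + rng.normal(0.0, scale, probabilities.shape),
                    1e-6, None)
    return noisy / noisy.sum()  # still a valid probability distribution
```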
“…This method of generating adversarial ML attacks is called the fast gradient sign method (FGSM). Kurakin et al. [9] proposed the basic iterative method (BIM) attack, which improves on FGSM by iteratively applying small, optimized perturbations to generate adversarial examples. Papernot et al. [3] proposed a targeted saliency-map-based attack, in which a saliency map is used iteratively to find the most significant input features that, when fractionally perturbed, cause DNNs to misclassify; this attack is known as the Jacobian saliency map attack (JSMA).…”
Section: B. Taxonomy of Security Attacks on Machine Learning (mentioning)
confidence: 99%
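Since the excerpt describes BIM as an iterative refinement of FGSM, a compact sketch of that iteration is given below. The PyTorch model, step size alpha, iteration count, and epsilon-ball projection are assumptions chosen for illustration, not parameters from the cited papers.

```python
# Sketch of the basic iterative method (BIM): repeated small FGSM steps,
# each projected back into an epsilon-ball around the original input.
import torch
import torch.nn as nn

def bim_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               epsilon: float = 0.05, alpha: float = 0.01,
               steps: int = 10) -> torch.Tensor:
    """Iterate small sign-gradient steps while staying within the epsilon-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project onto the epsilon-ball around x and the valid input range.
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - epsilon),
                                          x + epsilon), 0.0, 1.0)
    return x_adv.detach()
```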