2020
DOI: 10.1109/mnet.001.1900197
The Adversarial Machine Learning Conundrum: Can the Insecurity of ML Become the Achilles' Heel of Cognitive Networks?

Abstract: The holy grail of networking is to create cognitive networks that organize, manage, and drive themselves. Such a vision now seems attainable thanks in large part to progress in the field of machine learning (ML), which has already disrupted a number of industries and revolutionized practically all fields of research. But are ML models foolproof and robust enough to security attacks to be put in charge of managing the network? Unfortunately, many modern ML models are easily misled by simple and easily crafted …

Cited by 19 publications (13 citation statements). References 15 publications.
“…2) Adversarial Machine Learning (ML): Adversarial attacks are the result of recent efforts to identify vulnerabilities in ML/DL model training and inference. Adversarial attacks have emerged as one of the biggest security threats to ML/DL systems [20], [98], [99], [100], [101]. In adversarial attacks, the key goal of an adversary is to generate adversarial examples by adding small, carefully crafted (imperceptible) perturbations to the actual (unmodified) input samples in order to compromise the integrity of the ML/DL system.…”
Section: B. The Security of ML: An Overview
confidence: 99%
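To make the attack concrete, here is a minimal sketch of one standard way to generate such perturbations, the Fast Gradient Sign Method (FGSM). The toy model, input shape, and epsilon budget are illustrative assumptions, not details from the cited works:

```python
# A minimal FGSM sketch: perturb the input in the direction that increases
# the classifier's loss, bounded by a small budget epsilon. Model and data
# below are illustrative placeholders, not from the cited papers.
import torch
import torch.nn as nn

def fgsm_example(model, x, y, epsilon=0.03):
    """Return x plus a small perturbation that pushes the loss uphill."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step along the sign of the loss gradient, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid range

# Toy usage with a hypothetical classifier over 28x28 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)     # a stand-in "clean" sample
y = torch.tensor([3])            # its (assumed) true label
x_adv = fgsm_example(model, x, y)
print((x_adv - x).abs().max())   # perturbation stays within epsilon
```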
“…2) Evaluating a defense: In the following, we provide a few important guidelines for evaluating ML-based 5G applications against adversarial ML attacks. These insights are drawn from Carlini et al. [19] and our previous works [6], [20].…”
Section: Attacking Reinforcement ML-based 5G Applications
confidence: 99%
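One such guideline, common across this literature, is to report accuracy on adversarially perturbed inputs alongside clean accuracy rather than clean accuracy alone. The sketch below illustrates that comparison; the interface reuses the FGSM sketch above and is an assumption for illustration, not the specific protocol of Carlini et al. [19]:

```python
# A minimal sketch of this evaluation guideline: measure accuracy on both
# clean and adversarially perturbed inputs. `attack` can be any function
# with the signature of the fgsm_example sketch above; all names here are
# illustrative, not the protocol of the cited works.
import torch

def accuracy(model, xs, ys):
    """Fraction of samples the model classifies correctly."""
    with torch.no_grad():
        return (model(xs).argmax(dim=1) == ys).float().mean().item()

def evaluate_defense(model, xs, ys, attack, **attack_kwargs):
    """Report clean vs. robust accuracy; a large gap flags a brittle defense."""
    clean_acc = accuracy(model, xs, ys)
    xs_adv = attack(model, xs, ys, **attack_kwargs)  # craft adversarial inputs
    robust_acc = accuracy(model, xs_adv, ys)
    return {"clean": clean_acc, "robust": robust_acc}

# Usage with the earlier toy model:
#   evaluate_defense(model, x, y, fgsm_example, epsilon=0.03)
```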
“…Both of the aforementioned outsourcing strategies come with new security concerns. In addition, the literature suggests that different types of attacks can be mounted on different components of the communication network as well (Usama et al., 2020a), for example intrusion detection (Han et al., 2020; Usama et al., 2020b), network traffic classification (Usama et al., 2019), and malware detection systems (Chen et al., 2018). Moreover, adversarial ML attacks have also been devised against client-side ML classifiers, such as Google’s phishing pages filter (Liang et al., 2016).…”
Section: Introduction
confidence: 99%
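As an illustration of the evasion attacks on network components described above, the sketch below computes the smallest L2 perturbation that pushes a flagged flow across a linear intrusion detector's decision boundary. The detector, flow features, and perturbation margin are all hypothetical stand-ins, not reproductions of the cited attacks:

```python
# A hypothetical evasion attack on an ML-based intrusion detector: for a
# linear model f(x) = w.x + b, the smallest L2 perturbation that flips a
# "malicious" verdict steps against the weight vector w. Features and
# labels below are synthetic stand-ins, not data from the cited studies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # stand-in network flow features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # 1 = "malicious" under a toy rule
detector = LogisticRegression().fit(X, y)

x = X[detector.predict(X) == 1][0]          # a flow the detector flags
w = detector.coef_[0]
f = detector.decision_function([x])[0]      # signed score; > 0 means malicious
# Step just past the decision boundary along -w (minimal L2 perturbation).
x_adv = x - (f + 0.1) * w / np.dot(w, w)

print(detector.predict(np.vstack([x, x_adv])))  # expected: [1 0]
```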