2020
DOI: 10.3390/su12166434

Robustness Evaluations of Sustainable Machine Learning Models against Data Poisoning Attacks in the Internet of Things

Abstract: With the increasing popularity of the Internet of Things (IoT) platforms, the cyber security of these platforms is a highly active area of research. One key technology underpinning smart IoT systems is machine learning, which classifies and predicts events from large-scale data in IoT networks. Machine learning is susceptible to cyber attacks, particularly data poisoning attacks that inject false data when training machine learning models. Data poisoning attacks degrade the performances of machine learning mod…

Cited by 54 publications (21 citation statements)
References: 35 publications
“…The attacks mentioned above, and the methods to defend against them, have been studied extensively in recent years (Dunn et al. 2020; Juuti et al. 2019). Researchers have shown that these attacks are precise and highly transferable, causing prediction errors in any AI model.…”
Section: Considering the XAI for AI Security (mentioning)
confidence: 99%
“…• Poisoning attacks: In these attacks, adversaries inject manipulated labels so that the ML model absorbs them during (re-)training, degrading the performance of the deployed model (assuming the adversaries either have complete control over the training dataset or can contribute to it) [168]. • Evasion attacks: Unlike poisoning attacks, evasion attacks occur after the model training phase, where the adversary may not know the exact data manipulation needed to attack the ML model.…”
Section: Adversarial ML (mentioning)
confidence: 99%
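To make the quoted poisoning/evasion distinction concrete, here is a minimal label-flipping poisoning sketch in Python. The scikit-learn random forest, the synthetic dataset, and the 20% poison rate are illustrative assumptions, not the setup of the cited paper or of reference [168].

```python
# Illustrative label-flipping poisoning sketch (assumed setup, not from the cited works).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for IoT traffic features with binary labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Poisoning: the adversary controls part of the training set and flips its labels.
poison_rate = 0.2
idx = rng.choice(len(y_train), size=int(poison_rate * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# Same model class, trained on clean vs. poisoned labels.
clean_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
poisoned_model = RandomForestClassifier(random_state=0).fit(X_train, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

An evasion attack, by contrast, would leave training untouched and instead perturb test-time inputs fed to the already-trained model.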
“…evasion attack vs. poisoning attack), and application scenarios, the suitable metrics may vary. For instance, in [32] the metrics Accuracy, Precision, False Positive, and True Positive were used to measure the negative impact of poisoning attacks on the integrity of four ML models (gradient-boosted machines, random forests, naive Bayes statistical classifiers, and feed-forward deep learning models) in IoT environments; in [110], the evaluation focused on the Recall metric to study the robustness of a deep-learning-based network traffic classifier under several Universal Adversarial Perturbation-based attacks against various traffic types, including chat, email, file transfer, streaming, torrent, and VoIP; in [9], several metrics, including Area Under the Curve (AUC, a type of false positive metric), Genuine Acceptance Rate (GAR), and False Acceptance Rate (FAR), were proposed to evaluate pattern classifiers used in adversarial applications such as biometric authentication, network intrusion detection, and spam filtering.…”
Section: Quantitative Analysis (mentioning)
confidence: 99%
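The quote above names the metrics used to quantify attack impact (Accuracy, Precision, true/false positive rates, AUC, GAR, FAR). The sketch below shows one way to compute the overlapping subset with scikit-learn for a before/after-poisoning comparison; the helper name `robustness_metrics` and the toy labels are illustrative assumptions, not the evaluation code of [32], [110], or [9].

```python
# Assumed helper for before/after-poisoning metric comparisons (illustrative only).
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, confusion_matrix, roc_auc_score

def robustness_metrics(y_true, y_pred, y_score):
    """Metrics in the spirit of the quoted evaluations (binary classification)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "true_positive_rate": tp / (tp + fn),   # recall; analogous to GAR in biometrics
        "false_positive_rate": fp / (fp + tn),  # analogous to FAR in biometrics
        "auc": roc_auc_score(y_true, y_score),
    }

# Toy example: ground truth, hard predictions, and scores from a held-out test set.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.1, 0.6, 0.9, 0.8, 0.4, 0.2, 0.7, 0.3])
print(robustness_metrics(y_true, y_pred, y_score))
```

Running the same helper on a clean model and a poisoned model over the same test set makes the degradation attributable to the attack directly comparable across metrics.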