2021
DOI: 10.1007/978-3-030-91431-8_57
Evaluating the Security of Machine Learning Based IoT Device Identification Systems Against Adversarial Examples

Cited by 3 publications (3 citation statements)
References 15 publications
“…In Algorithm 2, we provide the pseudo-code for the two-sample Kolmogorov-Smirnov goodness-of-fit test (KS-test) (Namvar et al., 2021) applied in Section 5.3. We set the significance level α to 0.05, which is the most commonly used value (Chen et al., 2019) and is recommended as a standard level (Fisher, 1955).…”
Section: Declarations (mentioning, confidence: 99%)
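
As a rough illustration of the test this statement describes, the sketch below runs a two-sample KS-test with scipy's ks_2samp at the α = 0.05 level quoted above. The feature arrays and their distributions are hypothetical stand-ins, not data or code from the cited papers.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
clean_features = rng.normal(loc=0.0, scale=1.0, size=500)        # hypothetical clean-sample feature values
adversarial_features = rng.normal(loc=0.1, scale=1.0, size=500)  # hypothetical adversarially perturbed values

# Two-sample KS-test: null hypothesis is that both samples are drawn
# from the same underlying distribution.
statistic, p_value = ks_2samp(clean_features, adversarial_features)

alpha = 0.05  # the standard significance level cited above
if p_value >= alpha:
    print(f"p = {p_value:.3f} >= {alpha}: cannot reject that the samples share a distribution")
else:
    print(f"p = {p_value:.3f} < {alpha}: the distributions differ significantly")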
“…In this section, we propose a statistical analysis to compare the stealthiness of adversarial attacks generated by QVCs and classical CNNs. We adapt the assumption from Namvar et al. (2021) to our scenario: an attack is imperceptible if at least one modulation class has a radio signal data distribution similar to that of the adversarially crafted radio signal data. For this purpose, we apply the two-sample Kolmogorov-Smirnov goodness-of-fit test (KS-test).…”
Section: Data Stealthiness (mentioning, confidence: 99%)
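
A minimal sketch of the imperceptibility criterion quoted above, assuming per-class feature arrays: the attack counts as stealthy if the adversarial samples are statistically indistinguishable, under the KS-test, from at least one class's samples. The is_stealthy helper and the per-class data are hypothetical illustrations, not the authors' pipeline.

import numpy as np
from scipy.stats import ks_2samp

def is_stealthy(adv_samples, class_samples_by_label, alpha=0.05):
    """Return True if adv_samples matches at least one class's
    distribution (two-sample KS-test, significance level alpha)."""
    for label, samples in class_samples_by_label.items():
        _, p_value = ks_2samp(adv_samples, samples)
        if p_value >= alpha:  # cannot reject equality of distributions
            return True
    return False

rng = np.random.default_rng(1)
# Hypothetical per-modulation-class feature samples.
classes = {f"mod_{i}": rng.normal(loc=i, scale=1.0, size=400) for i in range(4)}
adv = rng.normal(loc=0.05, scale=1.0, size=400)  # hypothetical adversarial features
print(is_stealthy(adv, classes))  # True: close to the mod_0 distribution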
“…In [31], the effects of different non-targeted and targeted adversarial attacks (such as FGSM, BIM, PGD, and MIM) were investigated on a CNN used for radio-frequency-based individual device identification. Similarly, in [32], the resilience of network-based IoT identification ML models was assessed against adversarial samples generated using FGSM, BIM, and JSMA. The results showed that classifier models with more than 90% accuracy dropped to 55-75% when exposed to maliciously crafted samples.…”
Section: ML/DL-Focused Adversarial Attacks (mentioning, confidence: 99%)
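
Since the quoted passage evaluates FGSM (alongside BIM and JSMA), a generic PyTorch sketch of the fast gradient sign method is given below for context. It is the textbook single-step formulation, not the implementation used in [31] or [32], and the [0, 1] clamp assumes normalized inputs.

import torch

def fgsm_attack(model, x, y, epsilon):
    """Untargeted FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step, then clamp to the assumed valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

BIM iterates this step with a small epsilon per iteration, which is why it typically degrades classifier accuracy more than single-step FGSM at the same total perturbation budget.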