2018 IEEE 16th International Conference on Industrial Informatics (INDIN)
DOI: 10.1109/indin.2018.8472060
Generation of Adversarial Examples to Prevent Misclassification of Deep Neural Network based Condition Monitoring Systems for Cyber-Physical Production Systems

Abstract: Deep neural network based condition monitoring systems are used to detect system failures of cyber-physical production systems. However, a known vulnerability of deep neural networks is adversarial examples: manipulated inputs, e.g. process data, that can mislead a deep neural network into misclassification. Adversarial example attacks can manipulate the physical production process of a cyber-physical production system without being recognized by the condition monitoring system. Manipulation of …
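To make the attack concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one standard way to generate such adversarial examples against a classifier. The abstract does not specify the attack algorithm or any code, so the model, tensors, and epsilon value here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.05) -> torch.Tensor:
    """Craft an adversarial version of process-data input x that pushes
    a condition-monitoring classifier toward misclassification (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one epsilon-sized step in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

The perturbation budget epsilon bounds how far each input feature may move, which is what lets manipulated process data stay close enough to normal readings to evade casual inspection.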

Cited by 15 publications (7 citation statements).
References 17 publications (18 reference statements).
“…Examples of works in the field of cybersecurity where an attack-agnostic method has been employed include the development of an Android malware classifier that is robust to poisoning attacks 149 , the evaluation of multiple attack-agnostic robustness defense methods including an ensemble of classifiers 150 , and the detection of adversarial attacks for the defense of cyber-physical systems 151 . Similarly, an attack-specific robustness defense strategy has been used in adversarial retraining against RNN classifiers for spam detection 152 and an anomaly detection system for sensor data 153 . As attack-agnostic defense strategies are more general, they should be preferred and be the focus of further research to solve real-world issues in the context of cybersecurity.…”
Section: Adversarial Defense Methods (mentioning; confidence: 99%)
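One of the attack-agnostic defenses named above, an ensemble of classifiers, can be sketched briefly; the voting scheme and the pre-trained models assumed below are illustrative, not the cited papers' implementations.

```python
import torch

def ensemble_predict(models: list, x: torch.Tensor) -> torch.Tensor:
    """Majority vote over independently trained classifiers.
    A perturbation crafted against any single member rarely transfers
    to all members at once, which is what makes the defense attack-agnostic."""
    votes = torch.stack([m(x).argmax(dim=1) for m in models])  # (n_models, batch)
    return votes.mode(dim=0).values                            # per-sample majority class
```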
“…While no direct consideration of adversarial vulnerability for SHM has yet been presented in the literature, the susceptibility of data-driven approaches has already been demonstrated for the related field of process monitoring 20 . In their paper, the authors demonstrate the adversarial fragility of a deep neural network trained to detect system failures and offer an adversarial training method, similar to Madry et al. 21 , for hardening the classifier.…”
Section: Adversarial Attacks on Pattern Recognition Models (mentioning; confidence: 97%)
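The Madry et al. approach referenced here trains the model on worst-case perturbations found by projected gradient descent (PGD). The following is a minimal sketch under the usual L-infinity threat model; eps, alpha, and the step count are arbitrary illustrative values.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Search for an adversarial input within an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start inside the ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()        # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project back into the ball
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One hardening step: fit the model on its own worst-case inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()                                # clear grads left by the attack
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```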
“…Specht et al. [118] trained a fully connected DNN on the SECOM dataset, recorded from a semiconductor manufacturing process, which consists of 590 attributes collected from sensor signals and variables during manufacturing cycles.…”
Section: Cyber-Physical Systems and Industrial Control Systems (mentioning; confidence: 99%)
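For orientation, a fully connected network of the kind described might look like the sketch below. The 590 input features and the binary pass/fail label come from the SECOM dataset itself; the layer widths and dropout rate are illustrative guesses, not the architecture Specht et al. report.

```python
import torch.nn as nn

# Hypothetical fully connected classifier for SECOM-style data:
# 590 sensor/process features per manufacturing cycle, two output classes.
model = nn.Sequential(
    nn.Linear(590, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 64),  nn.ReLU(),
    nn.Linear(64, 2),    # logits for {normal, anomalous}
)
```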
“…Rosenberg et al. [103] tried to defend an API call-based RNN classifier and compared their own RNN defense method, sequence squeezing, to five other defense methods inspired by existing CNN-based defense methods: adversarial retraining, statistical anomalous subsequences, defense GAN, nearest neighbor classification, and RNN ensembles. They showed that sequence squeezing provides the best trade-off between training and inference overhead (which is less critical in the computer vision domain) and adversarial robustness. Specht et al. [118] suggested an iterative adversarial retraining process to mitigate adversarial examples for semiconductor anomaly detection of sensor data. Soleymani et al. [116] used wavelet domain denoising of the iris samples by investigating each wavelet sub-band and removing the sub-bands that are most affected by the adversary.…”
(Mentioning; confidence: 99%)
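The iterative adversarial retraining credited to Specht et al. can be outlined schematically. This sketch reuses the pgd_attack helper from above as a stand-in attack; it illustrates the general loop of attacking the current model and retraining on the results, not their exact procedure.

```python
import torch.nn.functional as F

def iterative_adversarial_retraining(model, optimizer, loader, rounds=5):
    """Alternate between crafting adversarial examples against the current
    model and retraining on a mix of clean and adversarial batches
    (schematic; pgd_attack is the sketch defined earlier)."""
    for _ in range(rounds):
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)          # attack the current model
            for bx, by in ((x, y), (x_adv, y)):      # clean batch, then adversarial
                optimizer.zero_grad()
                F.cross_entropy(model(bx), by).backward()
                optimizer.step()
    return model
```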