2020
DOI: 10.48550/arxiv.2003.05631
Preprint

ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems

Cited by 10 publications (16 citation statements)
References 43 publications
“…For example, if the vanilla attacker in Section VI-A sets the factor α to a small value, the resulting false measurements will have a high probability of bypassing the DNN's detection. In this paper, we note that the attacker can generate his/her adversarial perturbations based on multiple padding cases to increase their transferability and finally bypass the random padding system, as used in [57] and [20]. However, generating transferable perturbations will affect the attack performance, such as the valid L2-norm, and increase the attacker's labor and resource costs [20].…”
Section: Discussion and Future Work
confidence: 99%
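The multi-padding idea in this excerpt can be sketched as an expectation-over-transformation style attack: average the detector's gradient over several padding cases so that a single perturbation transfers across whichever padding the defense samples. The sketch below is illustrative only, not code from [20] or [57]; `detector`, `pad_case`, and the padding offsets are all assumptions.

```python
# Hypothetical sketch: craft one perturbation that survives multiple
# padding cases by averaging the detection loss over all of them.
import torch

def pad_case(x: torch.Tensor, offset: int, total: int) -> torch.Tensor:
    """Zero-pad a 1-D measurement vector at a given offset (assumed scheme)."""
    left = torch.zeros(offset)
    right = torch.zeros(total - offset - x.numel())
    return torch.cat([left, x, right])

def transferable_perturbation(x, detector, pad_offsets, total_len,
                              alpha=0.01, steps=50, eps=0.1):
    """PGD-style loop whose loss is averaged over every padding case."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for off in pad_offsets:                 # average over padding cases
            padded = pad_case(x + delta, off, total_len)
            loss = loss + detector(padded)      # detection score to minimize
        loss = loss / len(pad_offsets)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed-gradient step
            delta.clamp_(-eps, eps)             # keep the perturbation small
        delta.grad.zero_()
    return delta.detach()
```

Averaging over cases is exactly the trade-off the excerpt notes: the perturbation becomes transferable across paddings, but each individual case is attacked less strongly, so the attacker pays in attack performance and in computation.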
“…Tian et al. extended [16] and proposed an adaptive normalized attack for power system ML applications [18]. The authors of [20] proposed constrained adversarial machine learning for CPS applications and demonstrated that an attacker can generate adversarial examples that satisfy the intrinsic constraints defined by physical systems, such as the residual-based detection in state estimation.…”
Section: B. Adversarial Attacks
confidence: 99%
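The residual-based constraint mentioned in this excerpt has a well-known linear-algebra form: in linear (DC) state estimation with z = Hx + e, any injection lying in the column space of the measurement matrix H leaves the least-squares residual unchanged and so bypasses residual-based bad-data detection. The NumPy sketch below shows that projection idea only; it is not ConAML's actual algorithm, and H, delta, and the dimensions are illustrative.

```python
# Minimal sketch: project an unconstrained adversarial step onto the
# column space of H so the injection is residual-invariant (a = H @ c).
import numpy as np

def project_onto_columnspace(delta: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Return the component of delta expressible as H @ c."""
    c, *_ = np.linalg.lstsq(H, delta, rcond=None)
    return H @ c

def residual(z: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Measurement residual against the least-squares state estimate."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return z - H @ x_hat

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 3))         # 8 measurements, 3 states (toy sizes)
delta = rng.standard_normal(8)          # unconstrained adversarial step
a = project_onto_columnspace(delta, H)  # residual-invariant injection

z = rng.standard_normal(8)
print(np.allclose(residual(z, H), residual(z + a, H)))  # True
```

The check at the end confirms the physical-consistency constraint: the residual before and after injecting a is identical, so a residual threshold test cannot distinguish the falsified measurements.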
“…By adding well-crafted perturbations to legitimate inputs, the attacker is able to deceive well-trained ML models into outputting wrong classification results. In addition, adversarial attacks have also been shown to be effective in power system applications [14]–[16]. As ML approaches become popular for detecting energy theft, the threat from adversarial attacks needs to be investigated to prevent potential financial losses.…”
Section: Introduction
confidence: 99%
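The "well-crafted perturbations" this excerpt refers to are typically computed from the model's own gradients. A minimal sketch in the style of FGSM (Goodfellow et al., 2015) follows; `model`, `x`, and `y` are placeholders for any trained classifier, an input batch, and its true labels, not objects from the cited papers.

```python
# Minimal FGSM sketch: one signed-gradient step on the loss, bounded
# by eps, is often enough to flip a classifier's prediction.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """Fast Gradient Sign Method: return an adversarial copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that *increases* the loss, bounded by eps.
    return (x_adv + eps * x_adv.grad.sign()).detach()
```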