2020
DOI: 10.48550/arxiv.2006.06861
Preprint

Robustness to Adversarial Attacks in Learning-Enabled Controllers

Abstract: Learning-enabled controllers used in cyber-physical systems (CPS) are known to be susceptible to adversarial attacks. Such attacks manifest as perturbations to the states generated by the controller's environment in response to its actions. We consider state perturbations that encompass a wide variety of adversarial attacks and describe an attack scheme for discovering adversarial states. To be useful, these attacks need to be natural, yielding states in which the controller can be reasonably expected to gener…
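
The abstract does not spell out the attack scheme, so the following is only a minimal sketch of the general setting it describes: a random search over a bounded (L-infinity) ball of state perturbations for one that degrades a learned policy's one-step reward. The names policy, reward_fn, and all parameters are assumptions for illustration, not the paper's interface, and the sketch ignores the paper's requirement that adversarial states be natural.

import numpy as np

def find_adversarial_state(policy, reward_fn, state, eps=0.05, n_trials=1000, seed=None):
    """Search an L-infinity ball of radius eps around `state` for the
    perturbed state on which the policy earns the lowest one-step reward."""
    rng = np.random.default_rng(seed)
    worst_state = state
    worst_reward = reward_fn(state, policy(state))
    for _ in range(n_trials):
        # Sample a candidate state inside the perturbation budget.
        candidate = state + rng.uniform(-eps, eps, size=state.shape)
        r = reward_fn(candidate, policy(candidate))
        if r < worst_reward:
            worst_state, worst_reward = candidate, r
    return worst_state, worst_reward

# Toy usage: a linear "controller" that steers the state toward the origin.
policy = lambda s: -0.5 * s
reward_fn = lambda s, a: -np.linalg.norm(s + a)  # closer to the origin is better
adv_state, adv_reward = find_adversarial_state(policy, reward_fn, np.array([1.0, -1.0]), seed=0)
print(adv_state, adv_reward)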

Cited by 4 publications (6 citation statements)
References 19 publications
“…The second category of explainable transition and reward models leverages understandable models of the task or environment, e.g., a transition model (Martínez et al., 2016; Zhu et al., 2020) or a preference model (Icarte et al., 2018; Icarte et al., 2019). Such models support both the RL agent's reasoning about its decision-making and humans' understanding of the decision-making process.…”
Section: Interpretable/Explainable Decision-Making Processes of RL
confidence: 99%
“…However, while testing the model is an essential part of the training loop, it should not be the only component to ensure safe operation. An example of how safety can be ensured is presented by Xiong et al. (2020), who propose using shield-based defenses, where agents learn to stay within predefined safe boundaries during training and application, thereby increasing robustness.…”
Section: Focus: Safety Evaluation
confidence: 99%
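
Xiong et al.'s shield construction is not reproduced in this excerpt; the sketch below is only a generic illustration of the shield idea (override a proposed action whenever it would leave a predefined safe set). All names, the toy dynamics, and the fallback rule are assumptions, not the cited paper's design.

class ActionShield:
    """Generic action shield (illustrative; not Xiong et al.'s exact construction):
    if the learned policy's proposed action would leave a predefined safe set,
    substitute a fallback action known to stay inside it."""
    def __init__(self, dynamics, is_safe, fallback):
        self.dynamics = dynamics  # (state, action) -> predicted next state
        self.is_safe = is_safe    # state -> bool, membership in the safe set
        self.fallback = fallback  # state -> a conservative safe action

    def filter(self, state, action):
        # Pass the learned action through only if its predicted successor is safe.
        if self.is_safe(self.dynamics(state, action)):
            return action
        return self.fallback(state)

# Toy usage: keep a scalar state inside [-1, 1].
shield = ActionShield(
    dynamics=lambda s, a: s + a,
    is_safe=lambda s: abs(s) <= 1.0,
    fallback=lambda s: -0.1 if s > 0 else 0.1,  # nudge back toward the origin
)
print(shield.filter(0.9, 0.5))  # unsafe proposal -> fallback (-0.1)
print(shield.filter(0.0, 0.2))  # safe proposal -> passed through (0.2)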
“…Clark et al. [113] also investigated the impact of adversarial attacks on ML policies for controlling a robotic system. Finally, Xiong et al. [114] proposed attacks and defenses against ML algorithms used in learning-enabled controllers.…”
Section: Adversarial Machine Learning (AML) and CPS
confidence: 99%
“…We highlight that our method can handle any other open-loop or feedback controllers that are not necessarily modeled as NNs. In this paper, we consider NN controllers due to their brittleness to small input perturbations [40], [41] and their lack of safe generalization to unseen tasks.…”
Section: Introduction
confidence: 99%
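
The brittleness this excerpt attributes to [40], [41] is straightforward to demonstrate. The sketch below applies an FGSM-style perturbation, computed by finite differences so no autodiff library is needed, to a tiny random network standing in for an NN controller; every name here is an assumption for illustration, not code from the cited works.

import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(32, 4)), rng.normal(size=(1, 32))
controller = lambda s: W2 @ np.tanh(W1 @ s)  # tiny stand-in for an NN controller

def fgsm_like(state, eps=0.05, h=1e-5):
    """FGSM-style perturbation via finite differences: step each state
    coordinate by eps in the direction that increases the control output."""
    grad = np.array([
        (controller(state + h * e) - controller(state)).item() / h
        for e in np.eye(state.size)
    ])
    return state + eps * np.sign(grad)

s0 = rng.normal(size=4)
s_adv = fgsm_like(s0)
print("state shift:", np.linalg.norm(s_adv - s0))           # small input change...
print("action shift:", controller(s_adv) - controller(s0))  # ...larger output change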