2021
DOI: 10.1016/j.nahs.2021.101110

Safe-visor architecture for sandboxing (AI-based) unverified controllers in stochastic cyber–physical systems

Cited by 7 publications (9 citation statements)
References 15 publications
“…Cheng et al [3] use action projection and train a second model on the previous interventions to reduce the need for future interventions. Zhong et al [15] derive a safe-visor that rejects infeasible actions proposed by the agent and replaces it with a safe action.…”
Section: Related Work (mentioning)
confidence: 99%
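
The sandboxing idea summarized in the statement above can be pictured as a supervisor that intercepts each action proposed by the unverified (e.g. learned) controller and substitutes an action from a verified fallback controller whenever the proposal is rejected. The Python sketch below is purely illustrative: the names SafeVisor, is_acceptable, and safe_controller are placeholders and are not the interface of the cited papers.

```python
from typing import Any, Callable

State = Any
Action = Any

class SafeVisor:
    """Illustrative sandbox: forward the unverified controller's action only
    if an acceptance check passes; otherwise fall back to a safe action.
    (Placeholder names; not the cited papers' actual interface.)"""

    def __init__(self,
                 unverified_controller: Callable[[State], Action],
                 safe_controller: Callable[[State], Action],
                 is_acceptable: Callable[[State, Action], bool]):
        self.unverified_controller = unverified_controller
        self.safe_controller = safe_controller
        self.is_acceptable = is_acceptable

    def act(self, state: State) -> Action:
        proposed = self.unverified_controller(state)
        if self.is_acceptable(state, proposed):
            return proposed                      # accept the proposal
        return self.safe_controller(state)       # reject it and replace with a safe action
```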
“…Using the actions a_i as support of the KDE in (12), the densities q_{θ,σ}(a*_j) and q_{θ,σ′}(a*_j) are computed. Then the feasibility model g is evaluated on all samples a*_j and the estimate of p(a*_j) is computed using (5) and importance sampling in (15). Finally, the gradient of θ can be computed according to (14).…”
Section: Training Process (mentioning)
confidence: 99%
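
The equation numbers (12), (5), (15) and (14) belong to the citing paper and are not reproduced here. The sketch below only illustrates the two generic ingredients named in the statement, an isotropic Gaussian KDE evaluated at query actions and self-normalized importance weights; the function names, bandwidth model, and usage are assumptions, not the citing paper's formulas.

```python
import numpy as np

def gaussian_kde_density(support, queries, sigma):
    """Evaluate an isotropic Gaussian KDE built on `support` at the `queries` points.

    Illustrative stand-in for densities such as q_{θ,σ}(a*_j); the citing
    paper's actual equation (12) is not reproduced here.
    """
    support = np.atleast_2d(np.asarray(support, dtype=float))   # (n, d)
    queries = np.atleast_2d(np.asarray(queries, dtype=float))   # (m, d)
    d = support.shape[1]
    diff = queries[:, None, :] - support[None, :, :]             # (m, n, d)
    sq_dist = np.sum(diff ** 2, axis=-1)                         # (m, n)
    norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)
    kernels = np.exp(-sq_dist / (2.0 * sigma ** 2)) / norm       # (m, n)
    return kernels.mean(axis=1)                                  # (m,)

def self_normalized_weights(target_density, proposal_density):
    """Self-normalized importance weights w_j ∝ p̂(a*_j) / q(a*_j)."""
    w = np.asarray(target_density, dtype=float) / np.maximum(proposal_density, 1e-12)
    return w / w.sum()

# Hypothetical usage: draw samples a*_j from the proposal KDE, evaluate a
# feasibility model g on them, and reweight to estimate an expectation:
# estimate = np.sum(self_normalized_weights(p_hat, q_vals) * g_vals)
```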
“…Explainable AI (XAI) focuses upon improving the transparency of AI decision-making processes, to provide clarity and justification to actions such as those that result in undesirable behaviour [4,5]. Publications in AI Safety include pragmatic approaches for harm avoidance and self-supervisory wrapper systems [6,7] as well as social approaches including exploration of legal regulation [8]. Recent work in Impact Minimisation (IM) seeks to generalise and penalise against any impactful behaviours that are not explicitly aligned with the agent's primary objective [9,10].…”
Section: Introduction (mentioning)
confidence: 99%
“…Various formal verification and synthesis techniques have been investigated to ensure safety in CPS [4][5][6][7]. Abstraction-based methods have gained significant popularity in the last two decades for safety analysis of CPS [6][7][8][9][10]. These methods approximate original systems with continuous state and input sets by their finite abstractions, constructed by discretizing the original sets.…”
Section: Introduction (mentioning)
confidence: 99%
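
As a rough illustration of the discretization step mentioned in the statement above, the sketch below partitions a continuous box-shaped state set into a uniform grid and maps each continuous state to its cell index. It is only a sketch under assumed names: real abstraction tools additionally construct abstract transition (and output) relations, which are omitted here.

```python
import numpy as np

def grid_abstraction(lower, upper, cells_per_dim):
    """Partition the box [lower, upper] into a uniform grid and return a map
    from continuous states to abstract (cell-index) states.

    Minimal sketch of the discretization behind abstraction-based methods;
    all names are illustrative.
    """
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    cells = np.asarray(cells_per_dim, dtype=int)
    widths = (upper - lower) / cells

    def state_to_cell(x):
        # Index of the grid cell (abstract state) containing the continuous state x.
        idx = np.floor((np.asarray(x, dtype=float) - lower) / widths).astype(int)
        return tuple(int(i) for i in np.clip(idx, 0, cells - 1))

    return state_to_cell

# Example: a 2-D state set [0, 1] x [0, 2] split into a 10 x 10 grid.
to_cell = grid_abstraction([0.0, 0.0], [1.0, 2.0], [10, 10])
print(to_cell([0.37, 1.5]))  # -> (3, 7)
```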