2019
DOI: 10.1007/978-3-030-20652-9_22
Formal Methods Assisted Training of Safe Reinforcement Learning Agents

Cited by 5 publications (2 citation statements)
References 6 publications
“…By contrast, RTA can be used to filter the actions of the RL algorithms to ensure safety. A variety of RTA approaches have been explored in the literature, including human-like intervention [184], Lyapunov-based approaches [185], [186], barrier functions [187], and formal verification of safety constraints [188], [189]. Determining the appropriate way to include RTA in the training process is still an area of active research.…”

Section: Sidebar: Shielded Learning
Confidence: 99%
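The run-time assurance (RTA) idea in the quoted passage can be illustrated with a minimal sketch: a shield sits between the learner and the environment, passing through actions that satisfy a safety predicate and substituting a verified fallback otherwise. All names here (`is_safe`, `fallback`, `shielded_step`) and the toy safety condition are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch of RTA as an action filter for an RL agent.
# Toy setting: a scalar state that must stay inside [-1, 1].

def is_safe(state, action):
    """Toy safety predicate: the action must keep the next state in [-1, 1]."""
    return -1.0 <= state + action <= 1.0

def fallback(state):
    """Verified backup action: steer back toward the origin."""
    return -0.5 if state > 0 else 0.5

def shielded_step(state, proposed_action):
    """Filter the learner's proposed action: pass it through when the safety
    predicate holds, otherwise substitute the fallback action."""
    if is_safe(state, proposed_action):
        return proposed_action
    return fallback(state)

# At state 0.9 the proposed action 0.5 would leave [-1, 1], so the shield
# overrides it with the fallback; at state 0.0 the same action passes through.
print(shielded_step(0.9, 0.5))   # -0.5 (fallback fires)
print(shielded_step(0.0, 0.5))   # 0.5 (safe action passes)
```

In shielded-learning setups the filter runs during training as well as deployment, so the learner only ever experiences safe transitions; how best to integrate the shield into the training loop is, as the quoted passage notes, still open research.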