2022
DOI: 10.1007/s10994-022-06143-6
Planning for potential: efficient safe reinforcement learning

Abstract: Deep reinforcement learning (DRL) has shown remarkable success in artificial domains and in some real-world applications. However, substantial challenges remain, such as learning efficiently under safety constraints. Adherence to safety constraints is a hard requirement in many high-impact application domains such as healthcare and finance. These constraints are preferably represented symbolically to ensure clear semantics at a suitable level of abstraction. Existing approaches to safe DRL assume that being uns…
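The abstract describes representing safety constraints symbolically so they carry clear semantics. One common way such constraints are enforced in safe RL is a "shield" that masks actions whose successor state would violate the constraint, before the learned policy chooses. The sketch below is a minimal toy illustration under assumed names and a made-up corridor environment; it is not the paper's actual method.

```python
import random

def step(state, action):
    # Toy deterministic transition on a 1-D corridor of cells 0..9.
    return state + 1 if action == "right" else state - 1

def is_unsafe(state):
    # Symbolic safety constraint: the agent must stay inside the corridor.
    return state < 0 or state > 9

def safe_actions(state, actions, is_unsafe):
    """Keep only actions whose successor state satisfies the constraint."""
    return [a for a in actions if not is_unsafe(step(state, a))]

def shielded_policy(state, policy):
    """Mask unsafe actions, then let the learned policy pick among the rest."""
    allowed = safe_actions(state, ["left", "right"], is_unsafe)
    return policy(state, allowed)

# Placeholder "learned" policy: choose uniformly among the allowed actions.
policy = lambda state, allowed: random.choice(allowed)

# At either boundary only the inward action survives the mask.
print(safe_actions(0, ["left", "right"], is_unsafe))  # ['right']
print(safe_actions(9, ["left", "right"], is_unsafe))  # ['left']
```

Because unsafe actions are removed before selection, the agent can never execute a constraint-violating step, regardless of what the underlying policy has learned so far.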

Cited by 2 publications
(1 citation statement)
References 24 publications
“…Here, risk-sensitive DRL can be employed instead of regular DRL [8]. Additionally, organizational constraints can be formalized and used within approaches that guarantee safety of the resulting policy [12]. We believe that, with the proposed approach, these challenging and interesting research directions that will further increase the impact of SWP have become feasible in practice.…”
Section: Discussion
confidence: 99%