Proceedings of the 19th ACM-IEEE International Conference on Formal Methods and Models for System Design 2021
DOI: 10.1145/3487212.3487339
Learning optimal decisions for stochastic hybrid systems

Abstract: We apply reinforcement learning to approximate the optimal probability that a stochastic hybrid system satisfies a temporal logic formula. We consider systems with (non)linear continuous dynamics, random events following general continuous probability distributions, and discrete nondeterministic choices. We present a discretized view of states to the learner, but simulate the continuous system. Once we have learned a near-optimal scheduler resolving the choices, we use statistical model checking to estimate it…
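The core idea in the abstract — tabular Q-learning over a discretized view of the state space while the underlying continuous system is simulated exactly — can be sketched as follows. This is a minimal illustration, not the paper's actual model: the toy dynamics, the grid width, the reward, and all names are assumptions introduced here.

```python
import math
import random
from collections import defaultdict

GRID = 0.5  # discretization width of the view presented to the learner


def discretize(x):
    """Map a continuous state to a grid cell index (the learner's view)."""
    return int(math.floor(x / GRID))


def step(x, action):
    """Simulate one step of a toy continuous system (an assumption).

    action 0 drifts down, action 1 drifts up; a random event with an
    exponentially distributed magnitude perturbs the state."""
    drift = -0.3 if action == 0 else 0.3
    x = x + drift + random.expovariate(5.0) - 0.2
    done = x >= 3.0 or x <= -3.0          # reach the target or fail
    reward = 1.0 if x >= 3.0 else 0.0     # reward 1 on reaching the target
    return x, reward, done


def q_learn(episodes=2000, alpha=0.1, gamma=0.99, eps=0.1):
    # Explicitly stored Q-function: (cell, action) -> value estimate.
    Q = defaultdict(float)
    for _ in range(episodes):
        x, done = 0.0, False
        while not done:
            s = discretize(x)
            # epsilon-greedy choice over the two nondeterministic actions
            if random.random() < eps:
                a = random.choice([0, 1])
            else:
                a = max([0, 1], key=lambda act: Q[(s, act)])
            x, r, done = step(x, a)
            s2 = discretize(x)
            target = r if done else r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q


random.seed(0)
Q = q_learn()
# The learned scheduler resolves the discrete choice per grid cell:
scheduler = {s: max([0, 1], key=lambda act: Q[(s, act)]) for s, _ in Q}
```

In the paper's setting, statistical model checking would then estimate the satisfaction probability under the fixed scheduler by running many further simulations of the continuous system and counting successes.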

Cited by 11 publications (3 citation statements) | References 51 publications
“…SMC also effectively works for non-Markovian and hybrid formalisms, as evidenced by modes' support for stochastic hybrid automata, however LSS does not [30]. We have recently combined SMC for such models with RL, but used explicitly stored Q-functions and discretisation for learning [86]. PMC approaches for such models are the subject of active research, with simple approaches based on interval abstractions provided by the Modest Toolset today [53,54].…”
Section: Discussion
confidence: 99%
“…Modest Tools. RL with an explicit representation of the Q-function is implemented in the Modest Toolset's modes tool to find strategies in non-linear stochastic hybrid automata, where classic PMC techniques cannot be applied due to the continuous nature of the state space [86].…”
Section: Reinforcement Learning
confidence: 99%
“…Continuous behavior that can be expressed by systems of ordinary differential equations can be simulated using an approximative approach [20,18], whereas piecewise-linear continuous behavior is simulated without approximation. HYPEG resolves discrete nondeterminism either probabilistically or using reinforcement learning to maximize or minimize the probability of a property [17,19], also in combination with a contract-based approach [2]. The tool is available at https://zivgitlab.uni-muenster.de/ag-sks/tools/HYPEG.…”
Section: Participating Tools and Framework
confidence: 99%