In this paper, we investigate a novel safe reinforcement learning problem with step-wise violation constraints. Our problem differs from existing work in that we consider stricter step-wise violation constraints and do not assume the existence of safe actions, which makes our formulation better suited to safety-critical applications that must ensure safety at every decision step and may not always have safe actions available, e.g., robot control and autonomous driving. We propose a novel algorithm, SUCBVI, which guarantees $O(\sqrt{ST})$ step-wise violation and $O(\sqrt{H^3 SAT})$ regret. Lower bounds are provided to validate the optimality of both the violation and regret bounds with respect to $S$ and $T$. Moreover, we further study a novel safe reward-free exploration problem with step-wise violation constraints. For this problem, we design an $(\varepsilon, \delta)$-PAC algorithm SRF-UCRL, which achieves nearly state-of-the-art sample complexity $O\big(\big(\frac{S^2 A H^2}{\varepsilon} + \frac{H^4 S A}{\varepsilon^2}\big)\big(\log\big(\frac{1}{\delta}\big) + S\big)\big)$ and guarantees $O(\sqrt{ST})$ violation during exploration. Experimental results demonstrate the superiority of our algorithms in safety performance and corroborate our theoretical results.
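The abstract does not spell out the mechanics of SUCBVI, but algorithms in the UCBVI family combine finite-horizon value iteration with optimistic exploration bonuses, and a step-wise safety constraint can be enforced by penalizing transitions into states estimated to be unsafe. The sketch below is purely illustrative and is not the paper's algorithm: the function name, the array shapes, the `bonus` term, and the `unsafe_cost` penalty are all our own assumptions.

```python
import numpy as np

def optimistic_value_iteration(P, R, H, bonus, unsafe_cost=None):
    """Schematic finite-horizon optimistic value iteration (NOT the paper's SUCBVI).

    P           : (S, A, S) transition probabilities
    R           : (S, A) rewards in [0, 1]
    H           : horizon length
    bonus       : (S, A) exploration bonus (e.g., UCB-style count bonus)
    unsafe_cost : optional (S,) penalty subtracted from the value of
                  risky successor states, a crude stand-in for a
                  step-wise safety constraint.
    Returns optimistic Q-values, values, and the greedy policy.
    """
    S, A, _ = P.shape
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        next_v = V[h + 1]
        if unsafe_cost is not None:
            # Discourage actions whose successors are estimated unsafe.
            next_v = next_v - unsafe_cost
        # Optimistic backup, clipped to the trivial range [0, H].
        Q[h] = np.clip(R + bonus + P @ next_v, 0.0, H)
        V[h] = Q[h].max(axis=1)
    pi = Q[:H].argmax(axis=2)  # greedy policy at each step h
    return Q, V, pi
```

In a full model-based algorithm, `P`, `R`, and `bonus` would be empirical estimates refreshed after each episode; here they are taken as given so the backup itself stays visible. Note how a large `unsafe_cost` on a state steers the greedy policy away from actions leading into it, even when that state carries high reward.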