2024
DOI: 10.1109/tnnls.2023.3329513

Relative Entropy Regularized Sample-Efficient Reinforcement Learning With Continuous Actions

Zhiwei Shang, Renxing Li, Chunhua Zheng et al.

Abstract: In this paper, a novel reinforcement learning (RL) approach, continuous dynamic policy programming (CDPP), is proposed to tackle the issues of both learning stability and sample efficiency in current RL methods with continuous actions. The proposed method naturally extends relative entropy regularization from the value function-based framework to the actor-critic (AC) framework of deep deterministic policy gradient (DDPG) to stabilize learning in continuous action spaces. It tackles the intra…
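
The full method is not given on this page, so the following is only a minimal illustrative sketch of what relative entropy regularization can look like inside a DDPG-style actor-critic update: a critic target penalized by the KL divergence between the new and previous policies. The penalty weight eta, the shared exploration standard deviation sigma, and the network shapes are assumptions, not the authors' implementation.

# Minimal sketch (assumed, not the authors' code): a KL-penalized critic
# target for a DDPG-style actor-critic agent.
import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim):
    # Small two-layer network used for both actor and critic in this sketch.
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

def regularized_critic_target(reward, next_obs, done,
                              target_actor, target_critic,
                              new_action, old_action,
                              gamma=0.99, eta=0.1, sigma=0.2):
    """Target: r + gamma * Q'(s', mu'(s')) - eta * KL(pi_new || pi_old).

    For two Gaussian policies N(mu_new, sigma^2 I) and N(mu_old, sigma^2 I),
    the KL term reduces to ||mu_new - mu_old||^2 / (2 sigma^2), so the penalty
    discourages the updated policy from drifting far from the previous one,
    which is the stabilizing role the abstract attributes to relative entropy
    regularization.
    """
    with torch.no_grad():
        next_a = target_actor(next_obs)
        next_q = target_critic(torch.cat([next_obs, next_a], dim=-1))
        kl = ((new_action - old_action) ** 2).sum(dim=-1, keepdim=True) / (2 * sigma ** 2)
        return reward + gamma * (1.0 - done) * next_q - eta * kl

# Usage with illustrative shapes: a batch of 32 transitions,
# 8-dimensional observations, 2-dimensional continuous actions.
obs_dim, act_dim, batch = 8, 2, 32
actor_t = nn.Sequential(make_mlp(obs_dim, act_dim), nn.Tanh())
critic_t = make_mlp(obs_dim + act_dim, 1)
target = regularized_critic_target(
    reward=torch.zeros(batch, 1), next_obs=torch.randn(batch, obs_dim),
    done=torch.zeros(batch, 1), target_actor=actor_t, target_critic=critic_t,
    new_action=torch.randn(batch, act_dim), old_action=torch.randn(batch, act_dim))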

Cited by 4 publications
References 15 publications