2024
DOI: 10.14569/ijacsa.2024.0150171

Experience Replay Optimization via ESMM for Stable Deep Reinforcement Learning

Richard Sakyi Osei, Daphne Lopez

Abstract: The memorization and reuse of experience, popularly known as experience replay (ER), has improved the performance of off-policy deep reinforcement learning (DRL) algorithms such as deep Q-networks (DQN) and deep deterministic policy gradients (DDPG). Despite its success, ER faces the challenges of noisy transitions, large memory sizes, and unstable returns. Researchers have introduced replay mechanisms focusing on experience selection strategies to address these issues. However, the choice of experience retention…
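
Since only the abstract is available here, the following is a minimal sketch of the baseline the abstract describes: a uniform experience replay buffer with first-in-first-out retention, as commonly used by off-policy agents such as DQN and DDPG. It is not the paper's ESMM mechanism; the class name ReplayBuffer, the Transition record, and all method names are illustrative assumptions.

```python
# Minimal, illustrative sketch of uniform experience replay (NOT the
# paper's ESMM). All names here are hypothetical, for exposition only.
import random
from collections import deque, namedtuple

Transition = namedtuple(
    "Transition", ["state", "action", "reward", "next_state", "done"]
)

class ReplayBuffer:
    def __init__(self, capacity: int):
        # Bounded FIFO storage: when full, the oldest transition is
        # evicted. This is the simplest experience-retention policy,
        # the kind of choice the abstract says replay mechanisms revisit.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done) -> None:
        """Memorize one agent-environment interaction."""
        self.buffer.append(Transition(state, action, reward, next_state, done))

    def sample(self, batch_size: int) -> list:
        """Reuse experience: draw a uniform minibatch for an off-policy update."""
        return random.sample(self.buffer, batch_size)

    def __len__(self) -> int:
        return len(self.buffer)

if __name__ == "__main__":
    buf = ReplayBuffer(capacity=10_000)
    # Toy transitions; in practice these come from rolling out the policy.
    for t in range(100):
        buf.push(state=t, action=t % 4, reward=1.0, next_state=t + 1, done=False)
    batch = buf.sample(32)
    print(len(buf), len(batch))
```

Replay mechanisms of the kind the abstract surveys typically replace one of the two policies above: the uniform `sample` (e.g., prioritized sampling by TD error) or the FIFO eviction in `push` (an experience-retention strategy), which is presumably where the paper's ESMM intervenes.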

Cited by 0 publications
References 39 publications