2024
DOI: 10.21203/rs.3.rs-3991723/v1
Preprint

Improving Deep Deterministic Policy Gradient with Compact Experience Replay

Daniel Neves, Lucila Ishitani, Zenilton Patrocínio

Abstract: Experience Replay (ER) improves data efficiency in Deep Reinforcement Learning by allowing the agent to revisit past experiences that can contribute to learning the current policy. A recent method, COMPact Experience Replay (COMPER), seeks to improve ER by reducing the number of experiences required for agent training with respect to the total accumulated reward in the long run. This method can approximate good policies on Atari 2600 games in the Arcade Learning Environment (ALE) from a considerably smaller …
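
The Experience Replay mechanism the abstract refers to is the standard replay buffer used in deep RL: transitions are stored as the agent acts and later resampled for training. The sketch below is a minimal, hypothetical illustration of a uniform replay buffer only; it is not the authors' COMPER method, and the class and parameter names (ReplayBuffer, capacity, batch_size) are illustrative.

    import random
    from collections import deque

    class ReplayBuffer:
        """Uniform experience replay: store transitions, sample random minibatches."""

        def __init__(self, capacity=100_000):
            # Bounded memory: once full, the oldest transitions are evicted first.
            self.buffer = deque(maxlen=capacity)

        def add(self, state, action, reward, next_state, done):
            # Store one transition tuple observed while interacting with the environment.
            self.buffer.append((state, action, reward, next_state, done))

        def sample(self, batch_size=32):
            # Revisiting randomly chosen past experiences breaks temporal
            # correlation between consecutive samples and improves data efficiency.
            return random.sample(self.buffer, batch_size)

        def __len__(self):
            return len(self.buffer)

COMPER's contribution, per the abstract, is to approximate comparably good policies from a considerably smaller store of experiences than a buffer like this one would normally require.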

Cited by 0 publications
References 43 publications