2021
DOI: 10.48550/arxiv.2109.05022
Preprint

Potential-based Reward Shaping in Sokoban

Zhao Yang, Mike Preuss, Aske Plaat

Abstract: Learning to solve sparse-reward reinforcement learning problems is difficult, due to the lack of guidance towards the goal. But in some problems, prior knowledge can be used to augment the learning process. Reward shaping is a way to incorporate prior knowledge into the original reward function in order to speed up the learning. While previous work has investigated the use of expert knowledge to generate potential functions, in this work, we study whether we can use a search algorithm (A*) to automatically generate…
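As a brief note on the technique named in the abstract (standard background on potential-based reward shaping, not text from the preprint itself): the environment reward r(s, a, s') is augmented with a shaping term F(s, a, s') = γΦ(s') - Φ(s), where Φ is a potential function over states and γ is the discount factor. Shaping of this form is known to leave the optimal policy unchanged, so the question the abstract raises amounts to whether results from A* search can supply a useful Φ automatically, rather than relying on hand-designed expert knowledge.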

Cited by 0 publications
References 7 publications