2022
DOI: 10.48550/arxiv.2205.10629
Preprint

User-Interactive Offline Reinforcement Learning

Abstract: Offline reinforcement learning algorithms still lack trust in practice, owing to the risk that the learned policy performs worse than the original policy that generated the dataset, or behaves in unexpected ways that are unfamiliar to the user. At the same time, offline RL algorithms are unable to tune their most important hyperparameter: the proximity of the learned policy to the original policy. We propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby overcoming both of the…
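The central idea in the abstract is a proximity hyperparameter that trades off value maximization against staying close to the policy that generated the dataset. The preprint's actual algorithm is not reproduced here; the sketch below shows a generic behavior-regularized actor loss with a runtime-adjustable coefficient, to illustrate the kind of knob the paper makes user-tunable. All names (`PolicyNetwork`, `QNetwork`, `actor_loss`, `beta`) are illustrative assumptions, not the authors' API.

```python
# Minimal sketch (not the authors' method) of a behavior-regularized
# offline RL actor loss whose proximity coefficient `beta` can be
# changed by the user at runtime.
import torch
import torch.nn as nn


class PolicyNetwork(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh()
        )

    def forward(self, obs):
        return self.net(obs)


class QNetwork(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


def actor_loss(policy, q, obs, behavior_act, beta):
    """Trade off value maximization against proximity to the dataset policy.

    beta = 0    -> pure value maximization (least constrained)
    large beta  -> policy pinned to the behavior actions (most conservative)
    """
    act = policy(obs)
    value_term = -q(obs, act).mean()                     # maximize learned Q
    proximity_term = ((act - behavior_act) ** 2).mean()  # stay near the data
    return value_term + beta * proximity_term


# Usage: sliding `beta` at runtime moves the policy continuously between
# imitating the dataset and fully exploiting the learned value function.
obs_dim, act_dim = 4, 2
policy, q = PolicyNetwork(obs_dim, act_dim), QNetwork(obs_dim, act_dim)
obs = torch.randn(32, obs_dim)
behavior_act = torch.rand(32, act_dim) * 2 - 1  # stand-in dataset actions
for beta in (0.1, 1.0, 10.0):                   # user-chosen proximity levels
    loss = actor_loss(policy, q, obs, behavior_act, beta)
    print(f"beta={beta}: actor loss {loss.item():.3f}")
```

Exposing this single coefficient as a runtime control is one plausible reading of the user-facing trade-off the abstract describes: the user, rather than the algorithm designer, decides how far the learned policy may stray from the original policy.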

Cited by 0 publications
References 26 publications