2023
DOI: 10.48550/arxiv.2301.02328
Preprint

Extreme Q-Learning: MaxEnt RL without Entropy

Abstract: Modern Deep Reinforcement Learning (RL) algorithms require estimates of the maximal Q-value, which are difficult to compute in continuous domains with an infinite number of possible actions. In this work, we introduce a new update rule for online and offline RL which directly models the maximal value using Extreme Value Theory (EVT), drawing inspiration from Economics. By doing so, we avoid computing Q-values using out-of-distribution actions, which is often a substantial source of error. Our key insight is to …
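
The EVT idea in the abstract has a standard concrete form: the maximum of noisy Q-values is approximately Gumbel-distributed, which motivates a convex Gumbel-regression loss whose minimizer is a soft maximum computed only from sampled, in-distribution actions. The sketch below is a minimal illustration of that idea under an assumed loss of the form E[exp(z) - z - 1] with z = (Q - v)/beta; the function names and parameters are illustrative assumptions, not code from the paper.

```python
# Hedged sketch: Gumbel regression as an EVT-style estimate of the maximal
# Q-value. The loss form and all names here are assumptions for exposition.
import numpy as np

def gumbel_loss(q_values: np.ndarray, v: float, beta: float = 1.0) -> float:
    """Convex loss E[exp(z) - z - 1], z = (q - v)/beta; its minimizer in v
    is a soft maximum of q_values."""
    z = (q_values - v) / beta
    z = np.clip(z, None, 30.0)  # guard against overflow in exp
    return float(np.mean(np.exp(z) - z - 1.0))

def soft_max_value(q_values: np.ndarray, beta: float = 1.0) -> float:
    """Closed-form minimizer of the loss above: beta * log E[exp(q/beta)].
    Approaches max(q_values) as beta -> 0."""
    z = q_values / beta
    return float(beta * (np.logaddexp.reduce(z) - np.log(len(z))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.normal(size=1000)        # Q-values at sampled, in-distribution actions
    v = soft_max_value(q, beta=0.5)  # soft-max estimate; no OOD action queries
    print(v, q.max(), gumbel_loss(q, v, beta=0.5))
```

Note that the estimate uses only Q-values at actions drawn from the data distribution, which is how this style of update sidesteps the out-of-distribution action queries the abstract identifies as a source of error.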

Cited by 0 publications.
References: 12 publications (19 reference statements).