2022
DOI: 10.48550/arxiv.2205.15294
Preprint

Efficient Φ-Regret Minimization in Extensive-Form Games via Online Mirror Descent

Abstract: A conceptually appealing approach for learning Extensive-Form Games (EFGs) is to convert them to Normal-Form Games (NFGs). This approach enables us to directly translate state-of-the-art techniques and analyses in NFGs to learning EFGs, but typically suffers from computational intractability due to the exponential blow-up of the game size introduced by the conversion. In this paper, we address this problem in natural and important setups for the Φ-Hedge algorithm, a generic algorithm capable of learning a large…
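
For background, the sketch below shows the classical Hedge (exponential-weights) update on the probability simplex, which is one step of online mirror descent with a negative-entropy regularizer; the Φ-Hedge algorithm named in the abstract generalizes this idea to Φ-regret. This is standard background rather than the paper's EFG-specific construction, and the learning rate eta and loss vector are illustrative assumptions.

import numpy as np

def hedge_update(weights: np.ndarray, losses: np.ndarray, eta: float) -> np.ndarray:
    # Multiplicative-weights step: w_i <- w_i * exp(-eta * loss_i), then renormalize.
    # Equivalent to one online-mirror-descent step with the negative-entropy mirror map.
    new_weights = weights * np.exp(-eta * losses)
    return new_weights / new_weights.sum()

# Illustrative usage: three actions, uniform start, one round of observed losses.
w = np.ones(3) / 3
w = hedge_update(w, losses=np.array([0.2, 0.7, 0.1]), eta=0.5)
print(w)  # probability mass shifts toward the lower-loss actions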

Cited by 0 publications
References 32 publications