2018
DOI: 10.48550/arxiv.1805.02777
Preprint

What game are we playing? End-to-end learning in normal and extensive form games

Cited by 15 publications (27 citation statements)
References 0 publications
“…In recent years, there has been a great deal of interest in creating structured neural network layers that define implicit relationships between their inputs and outputs. For instance, such layers have been created for SAT solving (Wang et al., 2019), ordinary differential equations (Chen et al., 2018), normal and extensive-form games (Ling et al., 2018), rigid-body physics (de Avila Belbute-Peres et al., 2018), sequence modeling (Bai et al., 2019), and various classes of optimization problems (Donti et al., 2017; Djolonga & Krause, 2017; Tschiatschek et al., 2018; Wilder et al., 2018; Gould et al., 2019). (Interestingly, softmax, sigmoid, and ReLU layers can also be viewed as implicit layers (Amos, 2019), though in practice it is more efficient to use their explicit form.)…”

Section: Related Work
confidence: 99%
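The implicit layers discussed in the statement above can be illustrated with a minimal sketch (a hypothetical toy, not code from any of the cited papers): a layer whose output z is defined implicitly by the equation z = tanh(Wz + x), solved by naive fixed-point iteration rather than by an explicit forward formula.

```python
import numpy as np

def implicit_layer(W, x, tol=1e-10, max_iter=500):
    """Layer output z defined implicitly by z = tanh(W @ z + x).

    Solved by naive fixed-point iteration; this converges when the
    spectral norm of W is below 1, since tanh is 1-Lipschitz.
    """
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))
W = 0.5 * W / np.linalg.norm(W, 2)  # force spectral norm 0.5 -> contraction
x = rng.standard_normal(3)
z = implicit_layer(W, x)
# z satisfies the implicit relation up to the tolerance:
print(np.max(np.abs(z - np.tanh(W @ z + x))))
```

The point of the implicit formulation is that the layer is specified by the equation its output must satisfy, not by the solver; gradients can then be obtained via the implicit function theorem instead of differentiating through the iteration.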
“…We consider the problem of predicting equilibria, given only contextual information, without knowing the players' cost functions. Although the players' cost functions are unknown, historical data pairs (d, x_d) can be utilized, consisting of contexts d and the resulting equilibrium x_d [9, 34, 36, 37, 57]. This work proposes a new framework: Nash Fixed Point Networks (N-FPNs).…”

Section: Attribute
confidence: 99%
“…Thus, by construction, N-FPNs are implicit-depth models, i.e., neural networks containing an arbitrary number of layers [5, 21, 22, 53], and the operator weights can be efficiently trained using Jacobian-free backpropagation [21]. Importantly, N-FPNs can also avoid direct, costly projections onto sets of constraints for players' actions, which is the computational bottleneck of multiple prior works [34, 36, 37].…”

Section: Attribute
confidence: 99%
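The Jacobian-free backpropagation mentioned above can be sketched on a scalar toy problem (an illustrative assumption, not the N-FPN construction itself): for a fixed point z of z = tanh(w·z + x), the exact implicit gradient dz/dw requires solving through a Jacobian term, whereas the Jacobian-free gradient backpropagates through only one application of the operator, treating the z on the right-hand side as a constant.

```python
import numpy as np

def fixed_point(w, x, tol=1e-12, max_iter=10000):
    """Solve z = tanh(w*z + x) for scalar z by fixed-point iteration."""
    z = 0.0
    for _ in range(max_iter):
        z_next = np.tanh(w * z + x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

w, x = 0.4, 0.7
z = fixed_point(w, x)
s = 1.0 - np.tanh(w * z + x) ** 2   # tanh' evaluated at the fixed point

# Exact implicit gradient: differentiating z = tanh(w*z + x) in w, with z
# also depending on w, gives dz/dw = s*z / (1 - s*w).
exact = s * z / (1.0 - s * w)

# Jacobian-free backpropagation: differentiate through ONE application of
# the operator only, holding the inner z fixed, giving dz/dw ~= s*z.
jfb = s * z

print(exact, jfb)  # JFB drops the (1 - s*w)^-1 correction but keeps the sign
```

The appeal is cost: the Jacobian-free gradient never forms or inverts the (1 − s·w) term (a full Jacobian solve in the vector case), yet still yields a descent-compatible direction under contraction assumptions.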