2016
DOI: 10.3389/frobt.2015.00039

Grammars for Games: A Gradient-Based, Game-Theoretic Framework for Optimization in Deep Learning

Abstract: Deep learning is currently the subject of intensive study. However, fundamental concepts such as representations are not formally defined (researchers "know them when they see them"), and there is no common language for describing and analyzing algorithms. This essay proposes an abstract framework that identifies the essential features of current practice and may provide a foundation for future developments. The backbone of almost all deep learning algorithms is backpropagation, which is simply a gradient compu…
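
As a concrete reading of that claim, the sketch below trains a tiny two-layer network using nothing more than a hand-written gradient computation (backpropagation) followed by a gradient-descent update. The layer sizes, data, and learning rate are arbitrary assumptions for illustration and are not taken from the paper.

import numpy as np

# Minimal sketch (assumed sizes and data): training reduces to a gradient
# computation (backpropagation) plus a parameter update.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))            # toy inputs
y = rng.normal(size=(32, 1))            # toy targets
W1 = 0.1 * rng.normal(size=(4, 8))      # first-layer weights
W2 = 0.1 * rng.normal(size=(8, 1))      # second-layer weights

for step in range(200):
    h = np.tanh(X @ W1)                 # forward pass
    pred = h @ W2
    err = pred - y                      # d(loss)/d(pred) for 0.5 * squared error
    grad_W2 = h.T @ err                 # backpropagation: chain rule, layer by layer
    grad_h = err @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h ** 2))   # tanh'(z) = 1 - tanh(z)^2
    W1 -= 1e-2 * grad_W1                # gradient-descent update
    W2 -= 1e-2 * grad_W2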

Cited by 6 publications (2 citation statements)
References: 58 publications
“…An interesting consequence of the main result is corollary 2 which provides a compact description of the weights learned by a neural network via the signal underlying correlated equilibrium. More generally, neural nets are a basic example of a game with a structured communication protocol (the path-sums) which determines how players interact [44]. It may be fruitful to investigate broader classes of structured games.…”
Section: Summary of Contribution (mentioning)
confidence: 99%
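
To make the quoted notion concrete, the sketch below checks the defining property of a correlated equilibrium: given a shared signal (a distribution over joint action recommendations), no player gains by deviating from its recommendation. The game (the standard "Chicken" game) and the distribution are illustrative assumptions, not taken from the cited work.

import numpy as np

# payoff[i][a, b]: payoff to player i when the row player plays a and the
# column player plays b. Actions: 0 = Dare, 1 = Chicken (assumed game).
payoff = [np.array([[0., 7.], [2., 6.]]),    # row player
          np.array([[0., 2.], [7., 6.]])]    # column player

# The shared signal: a distribution over joint action recommendations.
sigma = np.array([[0.0, 1/3],
                  [1/3, 1/3]])

def is_correlated_equilibrium(payoff, sigma, tol=1e-9):
    n = sigma.shape[0]
    for a in range(n):                        # row player told to play a
        cond = sigma[a]                       # weight on the column player's actions
        follow = cond @ payoff[0][a]          # expected payoff for obeying
        best_dev = max(cond @ payoff[0][d] for d in range(n))
        if best_dev > follow + tol:
            return False
    for b in range(n):                        # column player told to play b
        cond = sigma[:, b]
        follow = cond @ payoff[1][:, b]
        best_dev = max(cond @ payoff[1][:, d] for d in range(n))
        if best_dev > follow + tol:
            return False
    return True

print(is_correlated_equilibrium(payoff, sigma))   # True: no player benefits from deviating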
“…There has been a rising interest in game-theoretic analysis since the landmark Generative Adversarial Network [22]. By framing networks as a two-player competing game, prevalent efforts have been spent on studying its convergence dynamics [23] and effective optimizers to find stable saddle points [24,25]. Notably, our layer-as-player formulation has appeared in Balduzzi [26] to study the signal communication implied in the Back-propagation, yet without any practical algorithm being made.…”
Section: Introduction (mentioning)
confidence: 99%
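
As a minimal illustration of the two-player, saddle-point view mentioned above, the sketch below runs an extragradient variant of simultaneous gradient descent-ascent on the bilinear game f(x, y) = x·y, whose saddle point is (0, 0). The objective, step size, and iteration count are assumptions chosen for illustration; this is not an algorithm from the cited papers.

# Two-player zero-sum game f(x, y) = x * y: player x minimizes, player y maximizes.
# Plain simultaneous gradient descent-ascent spirals around the saddle point;
# the extragradient look-ahead step below converges to it.
lr = 0.1
x, y = 1.0, 1.0
for _ in range(2000):
    # Look-ahead step from the current gradients (df/dx = y, df/dy = x).
    x_half, y_half = x - lr * y, y + lr * x
    # Update from gradients evaluated at the look-ahead point.
    x, y = x - lr * y_half, y + lr * x_half

print(x, y)   # both close to 0, i.e. near the saddle point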