2017
DOI: 10.48550/arxiv.1703.11008
Preprint

Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data

Cited by 64 publications (113 citation statements: 4 supporting, 109 mentioning, 0 contrasting), published 2018–2024.
References 0 publications.
“…PAC generalization bounds have been used in robotics [40] and controls [41,42] for providing guarantees on learned models or controllers with low dimensionality. The PAC-Bayes framework [9] is a specific family of bounds in generalization theory that has recently been successful in providing generalization bounds for deep neural networks (DNNs) [43,44]. In our previous work, we developed the PAC-Bayes Control framework [45,3] for synthesizing control policies that provably generalize to novel environments.…”
Section: Related Work (mentioning)
confidence: 99%
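
For reference, the PAC-Bayes guarantee these works build on is typically stated in the Langford–Seeger form (a standard statement; the notation below is ours, not drawn from the quoted papers): with probability at least 1 − δ over an i.i.d. sample S of size n, simultaneously for all posteriors Q over hypotheses,

\[
\mathrm{kl}\!\left(\hat{L}_S(Q) \,\middle\|\, L(Q)\right) \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\]

where \(\hat{L}_S(Q)\) and \(L(Q)\) are the empirical and true risks of the randomized predictor Q, P is a prior fixed before seeing S, and \(\mathrm{kl}(q\,\|\,p)\) denotes the KL divergence between Bernoulli distributions with parameters q and p.
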
“…Importantly, our goal in this work is to learn switching policies, which, given a dataset of environment instances, generalize with provable guarantees to novel environments. To achieve this, we will utilize PAC-Bayes theory, which is known to provide strong generalization bounds in supervised learning [26], [27].…”
Section: Learning Provably Generalizable Switching Policies (mentioning)
confidence: 99%
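
To make the bound concrete, the sketch below numerically evaluates a PAC-Bayes-kl risk certificate for a Gaussian posterior over weights, in the spirit of the bound-computation pipeline of the paper above. It is a minimal illustration under our own assumptions (diagonal Gaussians, bisection to invert the binary KL); the function names are ours and are not taken from any cited codebase.

import math

def kl_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    # KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ) for diagonal Gaussians
    return sum(
        math.log(sp / sq) + (sq**2 + (mq - mp)**2) / (2 * sp**2) - 0.5
        for mq, sq, mp, sp in zip(mu_q, sigma_q, mu_p, sigma_p)
    )

def kl_bernoulli(q, p):
    # Binary KL divergence kl(q || p), clipped for numerical safety
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_risk_bound(emp_risk, kl_div, n, delta=0.05):
    # Invert the PAC-Bayes-kl inequality: return the largest p with
    # kl(emp_risk || p) <= (kl_div + ln(2*sqrt(n)/delta)) / n, via bisection.
    rhs = (kl_div + math.log(2 * math.sqrt(n) / delta)) / n
    lo, hi = emp_risk, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo

As a hypothetical usage: with an empirical error of 3%, a posterior–prior KL of 5000 nats (which kl_gaussians would supply when Q and P are diagonal Gaussians over the network weights), and n = 55000 training points, pac_bayes_risk_bound(0.03, 5000, 55000) certifies a true error of roughly 0.16, which is the sense in which such bounds can be nonvacuous for networks with many more parameters than training data.
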
“…Meanwhile, based on information-theoretic metrics, one can analyze general classes of updates and models, e.g., stochastic iterative algorithms for non-convex objectives, which makes these bounds applicable to deep learning. It has been shown that information-theoretic bounds are non-vacuous and closely track the real generalization error even in deep learning [14,26,9,39].…”
Section: Introduction (mentioning)
confidence: 99%
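
The information-theoretic bounds alluded to here are commonly stated in the Xu–Raginsky form (a standard result; the notation is ours and is not tied to the specific references [14,26,9,39]): if the loss is σ-sub-Gaussian under the data distribution, then the expected generalization gap of a learning algorithm that outputs hypothesis W from an n-point sample S satisfies

\[
\left|\, \mathbb{E}\big[ L(W) - \hat{L}_S(W) \big] \,\right| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]

where I(S; W) is the mutual information between the training sample and the learned hypothesis, so algorithms that extract few bits about their training data provably generalize.
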