2021
DOI: 10.3390/e23010126

Information-Theoretic Generalization Bounds for Meta-Learning and Applications

Abstract: Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. […]
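For orientation, the central quantity can be sketched as follows. The notation below (U for the inductive bias produced by the meta-learner, T_{1:N} for the meta-training data drawn from N tasks, σ² for a sub-Gaussianity parameter of the loss) is illustrative rather than taken from the paper, and the second display only indicates the typical flavor of an information-theoretic bound, not the paper's exact result.

% Illustrative notation only; not the paper's definitions or its precise bound.
\[
  \Delta\mathcal{L}(u \mid T_{1:N})
  \;=\;
  \underbrace{\mathcal{L}(u)}_{\text{expected loss on a new, randomly drawn task}}
  \;-\;
  \underbrace{\mathcal{L}_{\mathrm{meta}}(u \mid T_{1:N})}_{\text{average loss on the meta-training data}}
\]
\[
  \bigl|\,\mathbb{E}\bigl[\Delta\mathcal{L}(U \mid T_{1:N})\bigr]\,\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^{2}\, I\bigl(U;\, T_{1:N}\bigr)}{N}}
\]

Bounds of this type tie meta-overfitting to the mutual information I(U; T_{1:N}) between the learned inductive bias and the meta-training data: the less the meta-learner's output depends on the particular tasks observed, the smaller the gap.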


Cited by 21 publications (16 citation statements)
References 31 publications
“…Therefore, the distribution of the topologies is maximally diverse at some intermediate value of the interference radius. In line with this observation, metalearning is seen to profit from task diversity, which prevents meta-overfitting [21].…”
Section: Results (mentioning)
confidence: 57%
“…In this setting, we directly analyze the effects of generic, domain-invariant, and targeted augmentations on OOD risk. Our analysis is related to work on metalearning that considers generalization to a meta-distribution (Chen et al., 2021a; Jose & Simeone, 2021); however, these analyses focus on adaptation to new tasks instead of out-of-domain generalization.…”
Section: Related Work (mentioning)
confidence: 99%
“…Additionally, (Denevi et al., 2019; …) studied algorithms which incrementally update the bias regularization parameter using a sequence of observed tasks. Another line of research studies the meta-generalization gap, finding bounds on it on average (Jose & Simeone, 2021; Rezazadeh et al., 2021) or with high probability (Pentina & Lampert, 2014; Amit & Meir, 2018; Rothfuss et al., 2021; Liu et al., 2021; Guan et al., 2022).…”
Section: Related Work (mentioning)
confidence: 99%