2022
DOI: 10.48550/arxiv.2203.03597
Preprint

Fast rates for noisy interpolation require rethinking the effects of inductive bias

Abstract: Good generalization performance on high-dimensional data crucially hinges on a simple structure of the ground truth and a corresponding strong inductive bias of the estimator. Even though this intuition is valid for regularized models, in this paper we caution against a strong inductive bias for interpolation in the presence of noise: Our results suggest that, while a stronger inductive bias encourages a simpler structure that is more aligned with the ground truth, it also increases the detrimental effect of n…
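For context, the setting described in the abstract can be illustrated with a small experiment. The sketch below is a hedged illustration, not code from the paper: it assumes the standard noisy sparse linear regression setup in the overparameterized regime and compares a minimum-l2-norm interpolator (weak inductive bias) with a minimum-l1-norm interpolator (a stronger, sparsity-inducing bias). The dimensions, noise level, and solver choice are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's experiments): noisy sparse
# linear regression with n < d, comparing two exact interpolators of the
# noisy labels that differ only in their inductive bias.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, k, sigma = 50, 200, 5, 0.5          # samples, dimension, sparsity, noise level

# Sparse ground truth and noisy observations y = X w* + noise
w_star = np.zeros(d)
w_star[:k] = 1.0
X = rng.standard_normal((n, d))
y = X @ w_star + sigma * rng.standard_normal(n)

# Min-l2-norm interpolator: w = X^+ y (weak inductive bias)
w_l2 = np.linalg.pinv(X) @ y

# Min-l1-norm interpolator (basis pursuit): minimize ||w||_1 s.t. X w = y,
# written as a linear program with w = u - v, u, v >= 0 (strong sparsity bias)
c = np.ones(2 * d)
A_eq = np.hstack([X, -X])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
w_l1 = res.x[:d] - res.x[d:]

# Both solutions fit the noisy labels exactly, so compare parameter error instead
for name, w in [("min-l2", w_l2), ("min-l1", w_l1)]:
    print(f"{name}: train residual {np.linalg.norm(X @ w - y):.2e}, "
          f"||w - w*||_2 = {np.linalg.norm(w - w_star):.3f}")
```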

Cited by 1 publication (1 citation statement)
References 7 publications
“…Because of its simplicity and well-developed theory in classical machine learning [58,15,16], sparse modeling is often used to provide theoretical understanding of modern large and over-parameterized models. This includes work on implicit regularization [59,60,61,62,63], nonconvex optimization [64,65], noisy interpolators [66,67,68], etc. However, the aforementioned work uses sparsity as a testbed or toy model to gain insights, without implying the existence of sparsity in DNNs.…”
Section: Sparsity for Robustness (citation type: mentioning)
confidence: 99%