2022
DOI: 10.48550/arxiv.2201.07395
Preprint

Overview frequency principle/spectral bias in deep learning

Abstract: Understanding deep learning is increasingly important as it penetrates more and more into industry and science. In recent years, a line of research based on Fourier analysis has shed light on this magical "black box" by showing a Frequency Principle (F-Principle, or spectral bias) in the training behavior of deep neural networks (DNNs): DNNs often fit functions from low to high frequency during training. The F-Principle was first demonstrated on one-dimensional synthetic data, followed by verification in high-dim…
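
The low-to-high frequency learning order described in the abstract can be reproduced on a toy problem. Below is a minimal sketch, not the paper's original experiment; the network size, optimizer, target frequencies, and training length are illustrative assumptions. It trains a small fully connected network on a one-dimensional target with three frequency components and prints the relative error of each component's DFT coefficient during training.

```python
# Minimal sketch of observing the F-Principle on 1-D synthetic data
# (illustrative hyperparameters, not the paper's original experiment).
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

freqs = [1, 5, 15]                                    # target frequencies (DFT bins)
x = (torch.arange(256).float() / 256.0).unsqueeze(1)  # 256 evenly spaced points in [0, 1)
y = sum(torch.sin(2 * np.pi * k * x) for k in freqs)  # multi-frequency target

model = nn.Sequential(
    nn.Linear(1, 200), nn.Tanh(),
    nn.Linear(200, 200), nn.Tanh(),
    nn.Linear(200, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def relative_freq_error(pred, target, k):
    """Relative error of the k-th DFT coefficient of the prediction."""
    P = np.fft.rfft(pred.detach().numpy().ravel())
    T = np.fft.rfft(target.numpy().ravel())
    return abs(P[k] - T[k]) / (abs(T[k]) + 1e-12)

for step in range(5001):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        errs = [relative_freq_error(model(x), y, k) for k in freqs]
        # Expected qualitative pattern: the k=1 error drops first, then k=5, then k=15.
        print(step, ["%.3f" % e for e in errs])
```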

Cited by 12 publications (17 citation statements) | References 52 publications
“…In the condensation regime, a large network with condensation is effectively a network of only a few effective neurons, leading to an output function of low complexity (Bartlett and Mendelson, 2002), and thus may provide a possible explanation for the good generalization performance of large NNs (Breiman, 1995). The condensation suggests a bias towards simple functions, which is consistent with the frequency principle that NNs tend to fit data with low-frequency functions (Xu et al., 2019; Rahaman et al., 2019; Xu et al., 2022). NNs have significant feature learning in the condensation regime.…”
Section: Related Work (mentioning)
confidence: 78%
“…Frequency loss. As studied in previous works [40, 47, 56] on learning behavior in the frequency domain, the spectral bias of deep neural networks inclines them towards low-frequency functions. Moreover, according to the F-Principle [57], the priority with which a network fits particular frequencies varies throughout training, often in a low-to-high pattern.…”
Section: Pre-training Strategy (mentioning)
confidence: 99%
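
The passage above refers to a loss defined in the frequency domain; its exact formulation in the cited works is not reproduced here. The following is a hedged sketch of a generic FFT-based loss (an assumption, not the cited method): it penalizes the discrepancy between the spectra of a predicted image and its target, which up-weights the high-frequency errors that spectral bias tends to leave behind.

```python
# Hedged sketch of a generic frequency-domain loss (an illustrative assumption,
# not the exact formulation of the works cited above).
import torch

def frequency_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean L1 distance between the 2-D FFT spectra of pred and target (N, C, H, W)."""
    pred_f = torch.fft.fft2(pred, norm="ortho")
    target_f = torch.fft.fft2(target, norm="ortho")
    return (pred_f - target_f).abs().mean()

# Typical usage during (pre-)training: combine with an ordinary pixel-wise loss,
# e.g. total = pixel_loss + lambda_freq * frequency_loss(pred, target)
```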
“…For general non-linear NNs, empirical studies suggest that NNs have an implicit regularization towards low-complexity functions during training (Arpit et al., 2017; Kalimeris et al., 2019; Goldt et al., 2020; Jin et al., 2020). For example, the frequency principle (Xu et al., 2019; Rahaman et al., 2019; Zhang et al., 2021; Xu et al., 2022) quantifies this implicit regularization towards "simple solutions" by showing that NNs learn the data from low to high frequency, i.e., an implicit low-frequency regularization. Similar to NNs in the linear regime studied in Luo et al. (2021) and the NTK regime studied in Jacot et al. (2018), the low-frequency regularization of non-linear NNs can be exactly formulated by a data-independent function (Zhang et al., 2021; Luo et al., 2020), which explicitly shows that NNs pick a low-frequency function from multiple solutions.…”
Section: Related Work (mentioning)
confidence: 99%
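
As a reading aid for the quoted statement, here is a standard sketch of why low frequencies converge first in the linear/NTK regime; this is textbook kernel-regime dynamics, not the specific data-independent formulation of Zhang et al. (2021) or Luo et al. (2020).

```latex
% Gradient flow on the squared loss in the kernel (NTK) regime, with kernel
% matrix $\Theta$ on the training inputs and network outputs $u(t)$:
\frac{d}{dt}\bigl(u(t) - y\bigr) = -\,\Theta\,\bigl(u(t) - y\bigr)
\quad\Longrightarrow\quad
u(t) - y = \sum_i e^{-\lambda_i t}\,\langle u(0) - y,\, v_i\rangle\, v_i ,
\qquad \Theta v_i = \lambda_i v_i .
% Eigen-directions with large $\lambda_i$ are fitted first; for common
% architectures these correspond to low-frequency functions, which is one way
% the low-to-high frequency order appears in the linear/NTK regime.
```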