2022
DOI: 10.48550/arxiv.2202.13473
Preprint

The Spectral Bias of Polynomial Neural Networks

Abstract: Polynomial neural networks (PNNs) have recently been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical. Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training. Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs. We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher-frequency components.
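
The spectral bias the abstract refers to can be illustrated with a small self-contained experiment. The sketch below is not the paper's code and does not implement a Π-Net; it trains a generic two-layer tanh network on a sum of a low-frequency and a high-frequency sinusoid (all hyperparameters are illustrative assumptions) and tracks the residual at each frequency, showing the low-frequency component being fitted first.

```python
# Minimal sketch (not the paper's code): a plain two-layer tanh network is
# trained on a sum of a low-frequency (k=1) and a high-frequency (k=8)
# sinusoid; tracking the residual per frequency shows the low-frequency
# component being fitted first. All hyperparameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False).reshape(-1, 1)
y = np.sin(2 * np.pi * 1 * x) + np.sin(2 * np.pi * 8 * x)

width, lr, steps = 256, 0.05, 10000
W1 = rng.normal(size=(1, width))
b1 = np.zeros(width)
W2 = rng.normal(scale=1.0 / np.sqrt(width), size=(width, 1))
b2 = np.zeros(1)

for step in range(steps + 1):
    h = np.tanh(x @ W1 + b1)          # hidden activations, shape (n, width)
    pred = h @ W2 + b2                # network output, shape (n, 1)
    err = pred - y
    if step % 2000 == 0:
        # Residual magnitude at the two target frequencies via the FFT.
        spec = np.abs(np.fft.rfft(err[:, 0])) / n
        print(f"step {step:5d}   residual at k=1: {spec[1]:.4f}   at k=8: {spec[8]:.4f}")
    # Manual backprop of the mean-squared error through the two layers.
    g_pred = 2.0 * err / n
    gW2, gb2 = h.T @ g_pred, g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    gW1, gb1 = x.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

According to the abstract, the multiplicative interactions in the Π-Net parametrization change the NTK spectrum so that higher-frequency components are learned faster than they are by a plain network such as the one above.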

Cited by 2 publications (3 citation statements) | References 24 publications
“…For example, [16] reveals how PNNs' architecture relates to polynomial factorization. Kileel et al. [33] and Choraria et al. [11] study the expressive power of PNNs. Our work establishes the connection between PNNs and many neural fields such as MFN [22] and BACON [36].…”
Section: Related Work (mentioning, confidence: 99%)
“…Doing the same thing to IPE results in worse performance, as shown in RIPE. [Table fragment comparing PNF, RIPE-sup, and IPE-sup omitted.]…”
(mentioning, confidence: 99%)
“…To address the limitations of DNNs in learning high-frequency components, empirical works use Fourier features explicitly as part of the input [43]. A parametrization of polynomial DNNs has also been shown to speed up the learning of higher-frequency components in two-layer networks [44]. Different notions of spectral priors have also been empirically investigated in graph neural networks [22] and elsewhere [45].…”
Section: Related Work (mentioning, confidence: 99%)
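
The first sentence of this statement refers to supplying Fourier features explicitly as part of the network input. A minimal sketch of that mapping follows; this is not code from the cited works, and the frequency scale, feature count, and coordinate dimension are illustrative assumptions.

```python
# Minimal sketch of a random Fourier feature input mapping of the kind the
# quoted passage refers to (gamma(x) = [cos(2*pi*xB), sin(2*pi*xB)] with a
# random frequency matrix B). The frequency scale, feature count, and input
# dimension are illustrative assumptions, not values from the cited works.
import numpy as np

def fourier_features(x, B):
    """Map coordinates x of shape (n, d) through frequencies B of shape (d, m)
    to features of shape (n, 2*m)."""
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(2, 64))   # 2-D coordinates -> 64 random frequencies
coords = rng.random((4, 2))                # e.g. normalized pixel coordinates in [0, 1]^2
print(fourier_features(coords, B).shape)   # (4, 128)
```

Feeding such features, rather than raw coordinates, into a standard MLP is the empirical route to high-frequency content mentioned in [43]; the paper discussed here instead obtains a related effect architecturally, through the polynomial (multiplicative) structure of Π-Nets.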