2024
DOI: 10.4208/jml.230924

Approximation Results for Gradient Flow Trained Neural Networks

Gerrit Welper

Abstract: The paper contains approximation guarantees for neural networks that are trained with gradient flow, with error measured in the continuous $L_2(\mathbb{S}^{d-1})$-norm on the $d$-dimensional unit sphere and targets that are Sobolev smooth. The networks are fully connected, of constant depth and increasing width. We show gradient flow convergence based on a neural tangent kernel (NTK) argument for the non-convex optimization of the second-to-last layer. Unlike standard NTK analysis, the continuous error norm implies an unde…
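To make the setting in the abstract concrete, here is a minimal sketch of that training regime: a fully connected ReLU network of constant depth in which only the second-to-last layer is trained, with the gradient flow discretized by explicit Euler steps and the $L_2(\mathbb{S}^{d-1})$ error estimated by Monte Carlo sampling on the unit sphere. This is not the paper's exact construction; the depth, width, step size, target function, and all variable names are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout, not the paper's
# exact construction): Euler-discretized gradient flow trains only the
# second-to-last layer of a depth-3 fully connected ReLU network, and the
# L2(S^{d-1}) error is estimated by Monte Carlo sampling on the sphere.
import jax
import jax.numpy as jnp

d, m, n = 5, 256, 500  # input dim, width, Monte Carlo sample size (assumed)
k1, k2, k3, k4 = jax.random.split(jax.random.PRNGKey(0), 4)

V = jax.random.normal(k1, (m, d)) * jnp.sqrt(2.0 / d)  # first layer, frozen
W = jax.random.normal(k2, (m, m)) * jnp.sqrt(2.0 / m)  # second-to-last, trained
a = jax.random.normal(k3, (m,)) / jnp.sqrt(m)          # output layer, frozen

def f(W, x):
    # Constant-depth fully connected network; only W is trainable here.
    return a @ jax.nn.relu(W @ jax.nn.relu(V @ x))

def target(x):
    # Smooth (hence Sobolev-smooth) stand-in target on the sphere.
    return jnp.sin(3.0 * x[0]) * x[1]

def sample_sphere(key, n):
    # Uniform points on S^{d-1} via normalized Gaussian vectors.
    z = jax.random.normal(key, (n, d))
    return z / jnp.linalg.norm(z, axis=1, keepdims=True)

xs = sample_sphere(k4, n)
ys = jax.vmap(target)(xs)

def sq_error(W):
    # Monte Carlo estimate of the normalized squared L2(S^{d-1}) error.
    return jnp.mean((jax.vmap(lambda x: f(W, x))(xs) - ys) ** 2)

grad_fn = jax.jit(jax.grad(sq_error))

eta, steps = 0.05, 1000  # Euler step size and number of steps (assumed)
for t in range(steps):
    W = W - eta * grad_fn(W)  # discretizes W'(t) = -grad L(W(t))
    if t % 200 == 0:
        print(t, float(jnp.sqrt(sq_error(W))))

print("final estimated L2 error:", float(jnp.sqrt(sq_error(W))))
```

Freezing all layers except the second-to-last mirrors the NTK-style argument: with the other layers fixed at initialization, the trained layer's dynamics stay close to kernel regression with the tangent kernel, which is the mechanism behind convergence guarantees of this type.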

Cited by 0 publications
References 41 publications