2021
DOI: 10.48550/arxiv.2109.09710
Preprint
Understanding neural networks with reproducing kernel Banach spaces

Abstract: Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one hidden layer neural networks of possibly infinite width. Further, we show that, for a suitable class of ReLU activation fu…

Cited by 6 publications (14 citation statements)
References 34 publications
“…As a second example, we strengthen the representation results for 2-layer ReLU networks established by Parhi and Nowak [18] and Bartolucci et al. [1], which build on the univariate case investigated in [25]. The involved norm was also studied from a theoretical point of view in [17].…”
Section: Introduction (supporting)
Confidence: 80%
“…Compared to previous results in the literature [1,18], this leads to a stronger characterization of the solution set S together with a nice and elegant proof.…”
Section: Radon Splines (mentioning)
Confidence: 69%