2020
DOI: 10.1109/tnnls.2019.2951788
Realizing Data Features by Deep Nets

Abstract: This paper considers the power of deep neural networks (deep nets for short) in realizing data features. Based on refined covering number estimates, we find that, to realize some complex data features, deep nets can improve the performance of shallow neural networks (shallow nets for short) without requiring additional capacity costs. This verifies the advantage of deep nets in realizing complex features. On the other hand, to realize a simple data feature such as smoothness, we prove that, up to a logari…

Cited by 17 publications (26 citation statements) · References 58 publications
“…Our result, exhibited in Theorem 1, establishes a covering number estimate for deep nets with arbitrarily many hidden layers and tree structures. This result improves the estimate in Guo et al. [29] by reducing the exponent of R, α, L in (6), i.e.,…”
Section: Advantages of Deep Nets with Tree Structures (supporting)
Confidence: 80%
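Equation (6) of the citing paper is not reproduced in the snippet, so the following is only an illustrative sketch of the generic shape such covering number estimates take; the constants C and θ and the parameter count n are assumptions for illustration, not taken from the source:

    % Illustrative schematic only: H_n is a class of deep nets with n free
    % parameters bounded in magnitude by R, an alpha-Lipschitz activation,
    % and L hidden layers; theta is the exponent the quoted result reduces.
    \log_2 \mathcal{N}\bigl(\varepsilon, \mathcal{H}_n, \|\cdot\|_{\infty}\bigr)
        \le C \, n \log \frac{(R \alpha L)^{\theta}}{\varepsilon},
        \qquad 0 < \varepsilon \le 1.

Reducing θ tightens the bound for every ε, which is why the quoted statement frames the improvement as reducing the exponent of R, α, L.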
“…Using a similar approach, an upper bound estimate for deep nets with tree structures and five hidden layers, without the Lipschitz assumption (5) on the activation function, was presented in Kohler and Krzyżak [34] and Lin [9]. Recently, Kohler and Krzyżak [13] provided an estimate for the covering numbers of deep nets with L hidden layers, where L ∈ ℕ. Furthermore, covering numbers for deep nets with arbitrary structures and bounded parameters were deduced in Guo et al. [29]. Our result, exhibited in Theorem 1, establishes a covering number estimate for deep nets with arbitrarily many hidden layers and tree structures.…”
Section: Advantages of Deep Nets with Tree Structures (mentioning)
Confidence: 60%
“…For these and some other RBFs, the existence and uniqueness of scattered data interpolation from the linear span of {f(x − x_k) : k = 1, …, ℓ}, for arbitrary distinct centers {x_1, …, x_ℓ} and for any ℓ ∈ ℕ, are assured. The reason for the popularity of the multiquadric RBF is the fast convergence rate of its interpolants to the target function [1], while the Gaussian RBF is popular because it is commonly used as the activation function for constructing radial networks that possess the universal approximation property and other useful features (see [21], [25], [35], [39], [40], [9] and references therein). The departure of our paper from constructing radial networks is that, since RBFs are radial functions, they qualify as target functions for our general-purpose deep nets with general activation functions.…”
Section: Introduction (mentioning)
Confidence: 99%
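As a concrete companion to the quoted passage, here is a minimal NumPy sketch of scattered-data RBF interpolation in one dimension; the kernels and the shape parameter c are standard textbook choices, not taken from the cited papers:

    import numpy as np

    # Minimal sketch (assumed, not from the cited papers): scattered-data
    # RBF interpolation with the two kernels named in the quoted passage.

    def gaussian(r, c=1.0):
        return np.exp(-(r / c) ** 2)

    def multiquadric(r, c=1.0):
        return np.sqrt(r ** 2 + c ** 2)

    def rbf_interpolant(centers, values, kernel):
        # Interpolation matrix A_{jk} = f(x_j - x_k); for distinct centers it
        # is positive definite (Gaussian) or nonsingular (multiquadric, by
        # Micchelli's theorem), so the weights below exist and are unique.
        A = kernel(np.abs(centers[:, None] - centers[None, :]))
        w = np.linalg.solve(A, values)
        return lambda x: kernel(np.abs(np.atleast_1d(x)[:, None] - centers[None, :])) @ w

    # Usage: recover sin from 8 scattered samples on [0, 2*pi].
    rng = np.random.default_rng(0)
    xs = np.sort(rng.uniform(0.0, 2.0 * np.pi, size=8))
    s = rbf_interpolant(xs, np.sin(xs), multiquadric)
    print(np.max(np.abs(s(xs) - np.sin(xs))))  # ~1e-15: exact at the centers

The solvability of the linear system for arbitrary distinct centers is exactly the existence-and-uniqueness guarantee the quoted passage refers to.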