2002
DOI: 10.1109/18.971754
Comparison of worst case errors in linear and neural network approximation

Cited by 127 publications (81 citation statements)
References 27 publications
Citing publications range from 2003 to 2024
“…G-variation is a norm on the subspace $\{f \in X : \|f\|_G < \infty\} \subseteq X$; for its properties see [15], [17], and [18]. In [16] and [18] it was shown that when $G$ is an orthonormal basis of a separable Hilbert space, $G$-variation is equal to the $\ell^1$-norm with respect to $G$, defined for $f \in X$ as $\|f\|_{1,G} = \sum_{g \in G} |f \cdot g|$.…”
Section: Rates of Decrease of Infima with Increasing Complexity of Admissible Sets (mentioning)
confidence: 99%
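For context, here is a minimal LaTeX sketch of the notions quoted above. The $\ell^1$-norm identity is exactly as stated in the excerpt; the Minkowski-functional form of $G$-variation is the standard definition from the variable-basis approximation literature, supplied here as background rather than taken from this page:

% G-variation of f: the Minkowski functional of the closed, convex,
% symmetric hull of a bounded subset G of a normed space (X, ||.||).
\[
  \|f\|_G \;=\; \inf\Bigl\{\, c > 0 \;:\; \tfrac{f}{c} \in
    \operatorname{cl}\,\operatorname{conv}\bigl(G \cup -G\bigr) \Bigr\}
\]
% When G is an orthonormal basis of a separable Hilbert space,
% G-variation coincides with the l^1-norm with respect to G
% (the identity quoted in the excerpt above):
\[
  \|f\|_G \;=\; \|f\|_{1,G} \;=\; \sum_{g \in G} \lvert f \cdot g \rvert
\]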
“…Theorem 3.1 will be used in the next section to investigate generalized Tikhonov well-posedness of $(M, e_C)$ for admissible sets $M$ computable by variable-basis functions and, as a particular case, by neural networks [16], [17]. Sets $\operatorname{span}_n G$ model situations in which admissible functions are represented as linear combinations of any $n$-tuple of functions from $G$, with unconstrained coefficients in the linear combinations.…”
Section: Conditions on M and C Guaranteeing Tikhonov Well-Posedness I… (mentioning)
confidence: 99%
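As a companion sketch, the variable-basis sets $\operatorname{span}_n G$ mentioned in the excerpt have the following standard definition in this literature (in the neural-network case, $G$ is taken to be the set of functions computable by a single hidden unit, so $\operatorname{span}_n G$ corresponds to networks with $n$ hidden units; this interpretation is standard background, not quoted from this page):

% span_n G: all linear combinations of at most n elements of G,
% with unconstrained real coefficients.
\[
  \operatorname{span}_n G \;=\; \Bigl\{\, \sum_{i=1}^{n} w_i\, g_i \;:\;
    w_i \in \mathbb{R},\ g_i \in G \Bigr\}
\]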