2005
DOI: 10.1007/s10208-004-0155-9
Learning Rates of Least-Square Regularized Regression

Abstract: This paper considers the regularized learning algorithm associated with the least-square loss and reproducing kernel Hilbert spaces. The target is the error analysis for the regression problem in learning theory. A novel regularization approach is presented, which yields satisfactory learning rates. The rates depend on the approximation property and on the capacity of the reproducing kernel Hilbert space measured by covering numbers. When the kernel is C∞ and the regression function lies in the corresponding repr…
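The regularized least-square algorithm the abstract refers to is kernel ridge regression in an RKHS: minimize (1/m) Σᵢ (f(xᵢ) − yᵢ)² + λ‖f‖²_K over the RKHS H_K. By the representer theorem the minimizer is f_z(x) = Σᵢ cᵢ K(x, xᵢ) with c = (K + λmI)⁻¹y. The sketch below is illustrative only; the Gaussian kernel (one of the C∞ kernels the abstract mentions), the bandwidth, and the value of λ are assumptions for the example, not choices from the paper.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=0.2):
    # Gram matrix of the Gaussian (C-infinity) kernel
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam, sigma=0.2):
    # Representer theorem: f_z(x) = sum_i c_i K(x, x_i), where
    # c = (K + lam * m * I)^{-1} y solves the regularized
    # objective (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, c, X_new, sigma=0.2):
    # Evaluate f_z at new points via the kernel expansion
    return gaussian_kernel(X_new, X_train, sigma) @ c
```

The learning rates studied in the paper describe how fast f_z approaches the regression function f_ρ as the sample size m grows, for a suitable schedule of λ = λ(m).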

Cited by 233 publications (163 citation statements)
References 31 publications
“…Its asymptotic behavior, as N goes to ∞, has been studied in many recent works, see e.g. Smale and Zhou (2007) and Wu, Ying, and Zhou (2006), also in the context of NARX identification (De Nicolao & Trecate, 1999).…”
Section: Theorem 2 (Representer Theorem) If H Is a RKHS, The Minimiz… (mentioning)
confidence: 99%
“…They may be improved when some extra information about the kernel, such as its regularity, is available. See [14].…”
Section: This Implies That (mentioning)
confidence: 99%
“…The rates for this approximation in L²_{ρ_X} have been considered in [3,4,16,11,14], while the approximation in the space H_K (hence in L^∞_{ρ_X} by (1.2) and in C^s by [17]) has been shown in [11]. (An early version of Theorem 1 below appeared in a late version of [11], and was subsequently removed.)…”
(mentioning)
confidence: 99%
“…It is known that one of the purposes of learning is to obtain f_z through samples z and to provide a consistency analysis of f_z and f_ρ. Kernel-based methods are a popular way to achieve this purpose, see [1,2,3,4,5,6,7].…”
Section: Introduction (mentioning)
confidence: 99%