2011
DOI: 10.1007/s10444-011-9238-8

Concentration estimates for learning with unbounded sampling

Abstract: The least-squares regression problem is considered under regularization schemes in reproducing kernel Hilbert spaces. The learning algorithm is implemented with samples drawn from unbounded sampling processes. The purpose of this paper is to present concentration estimates for the error based on ℓ²-empirical covering numbers, which improve learning rates in the literature.
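The regularization scheme described in the abstract is, in its simplest form, kernel ridge regression in an RKHS. The following is a minimal hypothetical sketch (not the paper's own code; the Gaussian kernel, the regularization parameter `lam`, and the Gaussian noise model are illustrative assumptions) of the regularized estimator f_{z,λ} fit to samples whose outputs carry unbounded (here Gaussian) noise:

```python
# Hypothetical sketch of kernel ridge regression in an RKHS; kernel choice,
# lam, and the noise model are assumptions for illustration only.
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian RBF kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def krr_fit(X, y, lam=1e-3, sigma=1.0):
    """Solve (K + lam * m * I) alpha = y, the regularized least-squares problem."""
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    """Evaluate f_{z,lam}(x) = sum_i alpha_i K(x, x_i)."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Samples from f(x) = sin(x); the output noise is Gaussian, hence unbounded.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=50)
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, X)
```

The point of the paper's analysis is to bound how far such an estimator's risk lies above the optimal risk even though the outputs y are not uniformly bounded.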


Cited by 38 publications (18 citation statements)
References 29 publications
“…It is widely used in [19,23,18,3], etc.; more detailed analysis can be found in [25,26]. More recent references [5,8,7,16] use ℓ²-empirical covering numbers to obtain a sharper upper bound for the excess generalization error E(f_{z,λ}) − E(f_ρ).…”
Section: Results
confidence: 98%
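For context, the excess generalization error E(f_{z,λ}) − E(f_ρ) quoted above is standardly defined as follows (a sketch using the usual least-squares conventions, with ρ the sampling distribution on Z = X × Y):

```latex
% Generalization error of a function f under the distribution \rho:
\mathcal{E}(f) = \int_{Z} \bigl(f(x) - y\bigr)^2 \, d\rho,
% minimized by the regression function
f_\rho(x) = \int_{Y} y \, d\rho(y \mid x),
% so the excess generalization error of the learned f_{z,\lambda} is
\mathcal{E}(f_{z,\lambda}) - \mathcal{E}(f_\rho) \ge 0 .
```

Concentration estimates bound this nonnegative quantity with high probability over the sample z.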
“…Here we will follow the work of [5]. Since the functions f_s and f_{z,λ} vary as the sample size m changes, we need a concentration inequality for a set of functions, as in [20].…”
Section: Sample Error
confidence: 99%
“…One usually assumes that the output data are uniformly bounded [4,17,18,26,27]. In recent years, learning with unbounded output data has started gaining attention [3,8,12,23,24]. In [12], capacity-dependent error bounds and learning rates have been derived for least-squares regularized regression learning with unbounded sampling by means of an integral operator approach introduced in [17].…”
Section: Introduction
confidence: 99%
“…In [23], a sample error bound has been deduced for the ERM learning algorithm with unbounded sampling by a covering number argument. In [8], concentration estimates for least-squares regression algorithms have been presented via ℓ²-empirical covering numbers. *Corresponding author.…”
Section: Introduction
confidence: 99%