2012
DOI: 10.1155/2012/902139

Approximation Analysis of Learning Algorithms for Support Vector Regression and Quantile Regression

Abstract: We study learning algorithms generated by regularization schemes in reproducing kernel Hilbert spaces associated with an ϵ-insensitive pinball loss. This loss function is motivated by the ϵ-insensitive loss for support vector regression and the pinball loss for quantile regression. Approximation analysis is conducted for these algorithms by means of a variance-expectation bound when a noise condition is satisfied for the underlying probability measure. The rates are explicitly derived under a priori conditions o…
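For orientation, here is a minimal numerical sketch of the loss the abstract refers to. The paper's exact definition is not reproduced here; the snippet assumes a common convention in which the loss vanishes inside a band of half-width ϵ around the residual and applies the pinball slopes τ and 1 − τ outside it. The function name and default parameters are illustrative only.

```python
import numpy as np

def eps_insensitive_pinball(u, tau=0.5, eps=0.1):
    """Assumed form of an epsilon-insensitive pinball loss on residuals u = y - f(x):
    zero inside the band |u| <= eps, pinball slopes tau / (1 - tau) outside it."""
    u = np.asarray(u, dtype=float)
    above = np.maximum(u - eps, 0.0)   # how far the residual exceeds the upper edge of the band
    below = np.maximum(-u - eps, 0.0)  # how far the residual falls below the lower edge of the band
    return tau * above + (1.0 - tau) * below

# Under this convention the two motivating losses appear as special cases:
#   tau = 0.5 recovers (half of) the epsilon-insensitive loss of support vector regression,
#   eps = 0   recovers the pinball loss of quantile regression.
```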

Cited by 18 publications (17 citation statements)
References 15 publications
“…One needs to point out that the proof of Theorem 2 is only applicable to the case q > 1. However, when q = 1, it is a special case of quantile regression and the same learning rates as those of Theorem 2 can be found in [17,18].…”
Section: The Covering Numbers of Balls
confidence: 73%
“…When fixing ϵ > 0, error analysis was conducted in [12]. Xiang, Hu and Zhou [17,18] showed how to accelerate learning rates and preserve sparsity by adapting ϵ. In [5], they discussed the convergence ability with flexible ϵ in an online algorithm.…”
Section: Introduction
confidence: 99%
“…Then we still choose the stepping-stone function …, which leads to … and …. Reference [31] shows that …, so we can follow the choice for ϵ and λ in Theorem 4 with … to get the learning rate …, where C̃ is a constant independent of m.…”
Section: Applications
confidence: 99%
“…is studied in [19]. Now, we restrict our attention to coefficient-based regularization schemes in a data-dependent hypothesis …”
Section: Introduction
confidence: 99%