2016
DOI: 10.1142/s0219530515500050

Learning rates for the risk of kernel-based quantile regression estimators in additive models

Abstract: Additive models play an important role in semiparametric statistics. This paper gives learning rates for regularized kernel-based methods for additive models. These learning rates compare favourably, in particular in high dimensions, to recent results on optimal learning rates for purely nonparametric regularized kernel-based quantile regression using the Gaussian radial basis function kernel, provided the assumption of an additive model is valid. Additionally, a concrete example is presented to show that a Gaus…
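To make the object of study concrete, the following is a minimal sketch of regularized kernel-based quantile regression with the pinball (check) loss and a Gaussian RBF kernel, fitted here by plain subgradient descent on the representer coefficients. It illustrates the kind of estimator the paper analyzes, not the paper's own algorithm; all function names and hyperparameters are illustrative.

```python
import numpy as np

def pinball_loss(residual, tau):
    """Pinball (check) loss for quantile level tau in (0, 1)."""
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def gaussian_kernel(X, Z, gamma):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kqr(X, y, tau=0.5, lam=1e-2, gamma=1.0, lr=0.1, n_iter=500):
    """Regularized kernel quantile regression: minimize the empirical
    pinball risk plus an RKHS penalty, with f = K @ alpha (representer form)."""
    n = len(y)
    K = gaussian_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        r = y - K @ alpha
        # subgradient of the pinball loss with respect to f(x_i)
        g = np.where(r >= 0, -tau, 1 - tau)
        grad = K @ g / n + 2 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha

# tiny usage example: median (tau = 0.5) regression on toy data
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
alpha = fit_kqr(X, y, tau=0.5)
f_hat = gaussian_kernel(X, X, 1.0) @ alpha
print(pinball_loss(y - f_hat, 0.5).mean())
```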

Cited by 27 publications (11 citation statements) · References 35 publications
“…The results in this paper only require that d → ∞ as n → ∞, with no further constraint on the ambient dimension d. In other words, our results cover the fixed-dimensional case, which has been studied extensively; see [14,18,28], among others. Interestingly, the excess-risk result for the proposed method shows that the Lasso-type method is more favourable in terms of prediction performance than sparsity recovery, since the latter requires strong incoherence conditions to guarantee sparsity-recovery consistency; see the related details in [54].…”
Section: 1
confidence: 90%
“…This is the optimal rate in the classical statistical learning literature for finitely many predictors (see Theorem 3 of [42]). Within the RKHS framework, Christmann and Zhou [14] gave the same learning rates for regularized kernel-based methods in additive but fixed-dimensional models. Nonetheless, their rates require a variance-expectation bound, unlike our sparse ℓ1-regularized method.…”
Section: A Fast Oracle Inequality
confidence: 98%
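For context, a variance-expectation bound in the sense common in the kernel-methods literature (e.g. Steinwart and Christmann) typically takes the following generic form; the constant c and exponent ϑ below are placeholders, not values from the cited works:

```latex
\mathbb{E}\!\left[\bigl(L \circ f - L \circ f^{*}\bigr)^{2}\right]
  \;\le\; c \,\Bigl(\mathbb{E}\!\left[L \circ f - L \circ f^{*}\right]\Bigr)^{\vartheta},
  \qquad c > 0,\ \vartheta \in [0,1],
```

where L ∘ f denotes the loss incurred by a predictor f and f* is the risk-minimizing (Bayes) predictor. Such a bound controls the variance of the excess loss by a power of its expectation and is what enables fast rates in that line of analysis.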
“…Suppose that and for each in (4) with . Here, is an unknown univariate function in a reproducing kernel Hilbert space (RKHS) associated with kernel and norm [30, 31], and is an intrinsic subset with cardinality . This means each observation is generated according to: where , and satisfies the condition (2).…”
Section: Methods
confidence: 99%
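The inline mathematics in the passage above was lost in extraction. A standard sparse additive model consistent with the surrounding description (one unknown univariate RKHS component per active coordinate, an intrinsic index set of small cardinality) would read as follows; the symbols S, s, f_j, and H_{K_j} are reconstructions, not necessarily the citing paper's notation:

```latex
y_i \;=\; \sum_{j \in S} f_j\bigl(x_{ij}\bigr) \;+\; \varepsilon_i,
\qquad f_j \in \mathcal{H}_{K_j}, \quad S \subseteq \{1,\dots,d\},\ |S| = s,
```

where each f_j is an unknown univariate function in the RKHS H_{K_j} with kernel K_j and norm ‖·‖_{K_j}, S is the intrinsic subset, and the noise ε_i satisfies the condition (2).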
“…The mode-induced regression metric is robust to non-Gaussian noise according to theoretical and empirical evaluations [14, 15, 17]. The regularized penalty addresses the sparsity and smoothness of the estimator, and has shown promising performance for mean regression [2, 29, 30, 31]. Therefore, unlike mean-based kernel regression and additive models, the mode-based approach enjoys robustness and interpretability simultaneously, owing to its metric criterion and its trade-off penalty.…”
Section: Introduction
confidence: 99%
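As a point of reference, mode-induced (modal) regression is commonly formulated by maximizing a kernel-smoothed density of the residuals subject to a penalty; the objective below is one such common formulation, not necessarily the one used in the citing paper:

```latex
\hat{f} \;=\; \operatorname*{arg\,max}_{f}\;
  \frac{1}{n}\sum_{i=1}^{n} K_h\!\bigl(y_i - f(x_i)\bigr) \;-\; \lambda\,\Omega(f),
```

where K_h is a smoothing kernel with bandwidth h, so the first term estimates the density of the residuals at zero and thereby targets the conditional mode, and Ω(f) is the regularization penalty enforcing sparsity and smoothness.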
“…The predicted value yᵢ is regarded as accurate if it lies within a distance ε of the actual value tᵢ, i.e. if |tᵢ − yᵢ| < ε. In Figure 3, the region bounded by yᵢ ± ε for all i is called the ε-insensitive region [9,10,11]. The penalty function is modified so that output values lying outside the region receive one of two slack-variable penalties, depending on whether they lie above (ξ⁺) or below (ξ⁻) the region, where ξ⁺ > 0 and ξ⁻ < 0 for all i.…”
Section: Regression With ε-Insensitive Region
confidence: 99%
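To make the tube construction concrete, here is a minimal numeric sketch of Vapnik's ε-insensitive loss and the two slack penalties. It follows the common convention that both slacks are nonnegative (the quoted passage's sign convention for ξ⁻ differs), and all names are illustrative:

```python
import numpy as np

def eps_insensitive_loss(t, y, eps):
    """Vapnik's epsilon-insensitive loss: zero inside the tube
    |t - y| <= eps, growing linearly outside it."""
    return np.maximum(np.abs(t - y) - eps, 0.0)

def slack_variables(t, y, eps):
    """Slack penalties for targets outside the tube y +/- eps:
    xi_plus  > 0 when the target lies above the tube (t > y + eps),
    xi_minus > 0 when it lies below (t < y - eps); both are zero inside."""
    r = t - y
    xi_plus = np.maximum(r - eps, 0.0)
    xi_minus = np.maximum(-r - eps, 0.0)
    return xi_plus, xi_minus

# tiny usage example
t = np.array([1.0, 2.5, 0.2])   # actual values t_i
y = np.array([1.1, 2.0, 0.9])   # predicted values y_i
print(eps_insensitive_loss(t, y, eps=0.3))  # [0.  0.2 0.4]
print(slack_variables(t, y, eps=0.3))       # ([0., 0.2, 0.], [0., 0., 0.4])
```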