We study learning algorithms generated by regularization schemes in reproducing kernel Hilbert spaces associated with an ϵ-insensitive pinball loss. This loss function is motivated by the ϵ-insensitive loss for support vector regression and the pinball loss for quantile regression. Approximation analysis is conducted for these algorithms by means of a variance-expectation bound when a noise condition is satisfied for the underlying probability measure. The rates are explicitly derived under a priori conditions on the approximation ability and capacity of the reproducing kernel Hilbert space. As an application, we obtain approximation orders for support vector regression and for regularized quantile regression.
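The abstract does not spell out the loss explicitly; the sketch below assumes one common form of the ϵ-insensitive pinball loss, which recovers the pinball loss of quantile regression for ϵ = 0 and (half of) the ϵ-insensitive loss of support vector regression for τ = 1/2. Parameter names and values are illustrative only, not taken from the paper.

```python
import numpy as np

def eps_insensitive_pinball(u, tau=0.5, eps=0.1):
    """One plausible form of the eps-insensitive pinball loss (an assumption,
    not necessarily the paper's exact definition): flat inside the band
    |u| <= eps, pinball slopes tau and (1 - tau) outside it."""
    u = np.asarray(u, dtype=float)
    loss = np.zeros_like(u)
    loss = np.where(u > eps, tau * (u - eps), loss)
    loss = np.where(u < -eps, (1.0 - tau) * (-u - eps), loss)
    return loss

# eps = 0 gives the pinball loss (quantile regression);
# tau = 0.5 gives half of the eps-insensitive loss (support vector regression).
residuals = np.array([-1.0, -0.05, 0.0, 0.05, 2.0])
print(eps_insensitive_pinball(residuals, tau=0.3, eps=0.1))
```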
Moving least squares (MLS) is an approximation method used in data interpolation, numerical analysis, and statistics. In this paper we consider the MLS method in learning theory for the regression problem. Essential differences between MLS and other common learning algorithms are pointed out: the lack of a natural uniform bound for the estimators and the pointwise definition. The sample error is estimated in terms of the weight function and the finite-dimensional hypothesis space. The approximation error is dealt with in two special cases, for which convergence rates for the total L² error, measuring the global approximation on the whole domain, are provided.
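To make the pointwise character of MLS concrete, here is a minimal one-dimensional sketch under the usual MLS setup; the Gaussian weight function, the polynomial basis, and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def mls_estimate(x0, X, Y, width=0.3, degree=1):
    """Moving least-squares estimate at a single query point x0:
    fit a low-degree polynomial by weighted least squares with weights
    centred at x0, then evaluate the local fit at x0."""
    w = np.exp(-((X - x0) ** 2) / (2.0 * width ** 2))   # weight function centred at x0
    P = np.vander(X, degree + 1, increasing=True)       # finite-dimensional hypothesis space
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(P * sw[:, None], Y * sw, rcond=None)
    return np.polyval(coef[::-1], x0)                   # evaluate the local fit at x0

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 50)
Y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(50)
print(mls_estimate(0.5, X, Y))
```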
We consider the multi-class classification problem in learning theory. A learning algorithm by means of Parzen windows is introduced. Under regularity conditions on the conditional probability for each class and a decay condition on the marginal distribution near the boundary of the input space, we derive learning rates in terms of the sample size, the window width and the decay of the basic window. The choice of the window width follows from bounds for the sample error and the approximation error. A newly defined splitting function for multi-class classification and a comparison theorem, bounding the excess misclassification error by the norm of the difference of function vectors, play an important role.
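As a rough illustration of such a rule, the sketch below scores each class by a Parzen-window average of its samples and predicts the arg-max; the Gaussian basic window and the window width are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

def parzen_multiclass_predict(x, X, Y, h=0.7):
    """Generic Parzen-window multi-class rule (a sketch, not necessarily the
    estimator analysed in the paper): score each class by summing a basic
    window evaluated at that class's samples, then take the largest score."""
    K = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * h ** 2))  # Gaussian basic window
    classes = np.unique(Y)
    scores = np.array([K[Y == c].sum() for c in classes])
    return classes[np.argmax(scores)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(3.0, 1.0, (30, 2))])
Y = np.array([0] * 30 + [1] * 30)
print(parzen_multiclass_predict(np.array([2.5, 2.5]), X, Y))
```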
Regularized empirical risk minimization using kernels and their corresponding reproducing kernel Hilbert spaces (RKHSs) plays an important role in machine learning. However, the kernel actually used often depends on one or a few hyperparameters, or the kernel is even data dependent in a much more complicated manner. Examples are Gaussian RBF kernels, kernel learning, and hierarchical Gaussian kernels, which were recently proposed for deep learning. Therefore, the kernel actually used is often computed by a grid search or in an iterative manner and can often only be considered an approximation to the "ideal" or "optimal" kernel. The paper gives conditions under which classical kernel-based methods based on a convex Lipschitz loss function and on a bounded and smooth kernel are stable if the probability measure P, the regularization parameter λ, and the kernel k may change slightly in a simultaneous manner. Similar results are also given for pairwise learning. Therefore, the topic of this paper is somewhat more general than in classical robust statistics, where usually only the influence of small perturbations of the probability measure P on the estimated function is considered.

An example is the pinball loss L(x, y, t) = τ(y − t) if y − t ≥ 0 and (1 − τ)(t − y) if y − t < 0, for some τ > 0, used for quantile regression. We refer to the literature for details and more examples of kernels.

Definition 2.3. The loss function L is called Lipschitz continuous if there exists a constant |L|₁ < ∞ such that |L(x, y, t₁) − L(x, y, t₂)| ≤ |L|₁ |t₁ − t₂| for all x ∈ X, y ∈ Y, t₁, t₂ ∈ ℝ. (2.2)

Assumption 2.4. Let L be a loss function that is convex with respect to the last argument and Lipschitz continuous with Lipschitz constant |L|₁ ∈ (0, ∞).

Assumption 2.5. For all (x, y) ∈ X × Y, let L(x, y, ·) be differentiable and let its derivative be Lipschitz continuous with Lipschitz constant |L′|₁ ∈ (0, ∞).

The moment condition E_P L(X, Y, 0) < ∞ excludes heavy-tailed distributions such as the Cauchy distribution and many other stable distributions used in financial or actuarial problems. We avoid the moment condition by shifting the loss by the term L(x, y, 0). This trick is well known in the literature on robust statistics, see, e.g., Huber (1967), Christmann et al. (2009), and Christmann and Zhou (2016). Denote the shifted loss function of L by L⋆(x, y, t) := L(x, y, t) − L(x, y, 0), (x, y, t) ∈ X × Y × ℝ. The shifted loss function L⋆ still shares the properties of L specified in Assumption 2.4 and Assumption 2.5, see Christmann et al. (2009, Proposition 2). In particular, if L is convex, differentiable, and Lipschitz continuous with Lipschitz constant |L|₁ with respect to the third argument, then L⋆ inherits convexity, differentiability and Lipschitz continuity from L with identical Lipschitz constant |L⋆|₁ = |L|₁. Additionally, if the derivative L′ is Lipschitz continuous with Lipschitz constant |L′|₁, then so is (L⋆)′, with identical Lipschitz constant |(L⋆)′|₁ = |L′|₁.
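The shift trick itself is easy to state in code; the snippet below is a minimal sketch of L⋆(x, y, t) = L(x, y, t) − L(x, y, 0), illustrated with the pinball loss (the helper names are ours, not the paper's).

```python
def shifted_loss(L):
    """Shift trick from the text: L*(x, y, t) = L(x, y, t) - L(x, y, 0).
    Convexity and the Lipschitz constant in t are inherited from L."""
    return lambda x, y, t: L(x, y, t) - L(x, y, 0.0)

def pinball(x, y, t, tau=0.5):
    """Convex, Lipschitz pinball loss used for quantile regression."""
    r = y - t
    return tau * r if r >= 0 else (tau - 1.0) * r

pinball_star = shifted_loss(pinball)
# The shifted loss can be negative, but its expectation can be finite even
# when E_P L(X, Y, 0) is infinite (e.g. for Cauchy-type noise).
print(pinball(None, 10.0, 2.0), pinball_star(None, 10.0, 2.0))
```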
In this paper we study conditional quantile regression by learning algorithms generated from Tikhonov regularization schemes associated with the pinball loss and varying Gaussian kernels. Our main goal is to provide convergence rates for the algorithm and to illustrate differences between conditional quantile regression and least squares regression. Applying varying Gaussian kernels improves the approximation ability of the algorithm. Bounds for the sample error are achieved by using a projection operator, a variance-expectation bound derived from a condition on the conditional distributions, and a tight bound for the covering numbers involving the Gaussian kernels.
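A minimal sketch of such a scheme is given below: Tikhonov-regularized pinball-loss regression in the RKHS of a Gaussian kernel, with the representer coefficients fitted by plain subgradient descent. This stands in for the regularization scheme only; the solver, kernel width, and all other parameter values are our own illustrative assumptions, not the paper's algorithm or analysis.

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix of the Gaussian kernel exp(-|x - x'|^2 / (2 sigma^2))."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def quantile_rkhs_fit(X, Y, tau=0.5, lam=1e-2, sigma=0.5, steps=2000, lr=0.05):
    """Minimize (1/m) * sum_i pinball_tau(y_i - f(x_i)) + lam * ||f||_K^2 over
    f = sum_j alpha_j K(x_j, .), using subgradient descent on alpha."""
    m = len(Y)
    K = gaussian_gram(X, sigma)
    alpha = np.zeros(m)
    for _ in range(steps):
        r = Y - K @ alpha                              # residuals y_i - f(x_i)
        g = np.where(r > 0, -tau, 1.0 - tau)           # subgradient of the pinball loss in f(x_i)
        grad = K @ g / m + 2.0 * lam * (K @ alpha)     # gradient of the regularized empirical risk
        alpha -= lr * grad
    return alpha, K

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, (80, 1))
Y = np.sin(2 * np.pi * X[:, 0]) + 0.2 * rng.standard_normal(80)
alpha, K = quantile_rkhs_fit(X, Y, tau=0.75)
print((K @ alpha)[:5])  # fitted 0.75-quantile values at the first few sample points
```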