2019
DOI: 10.1007/s00440-019-00931-3

Robust statistical learning with Lipschitz and convex loss functions

Abstract: We obtain estimation and excess risk bounds for Empirical Risk Minimizers (ERM) and minmax Median-Of-Means (MOM) estimators based on loss functions that are both Lipschitz and convex. Results for the ERM are derived under weak assumptions on the outputs and subgaussian assumptions on the design as in [2]. The difference with [2] is that the global Bernstein condition used there is relaxed here into a local assumption. We also obtain estimation and excess risk bounds for minmax MOM estimators under similar a…
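For reference, the minmax MOM construction named in the abstract is usually defined as follows; this is a hedged sketch of the standard formulation from the MOM literature, and the notation (MOM_K, B_k, ℓ_f) is ours, not quoted from the paper:

```latex
% Split the N observations into K equal blocks B_1, ..., B_K and replace
% the empirical mean of a function h by the median of its block means:
\mathrm{MOM}_K(h) = \operatorname{median}\bigl(P_{B_1} h, \dots, P_{B_K} h\bigr),
\qquad P_{B_k} h = \frac{K}{N} \sum_{i \in B_k} h(X_i, Y_i).

% The minmax MOM estimator over a class F with loss \ell_f is then
\hat f \in \operatorname*{argmin}_{f \in F} \; \sup_{g \in F} \; \mathrm{MOM}_K(\ell_f - \ell_g).
```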

Cited by 30 publications (39 citation statements); references 41 publications. Citation statements, ordered by relevance:
“…Regularized empirical risk minimization is the most widespread strategy in machine learning to estimate f*. There exists an extensive literature on its generalization capabilities [56,31,30,35,18]. However, in the past few years, many papers have highlighted its severe limitations.…”
Section: G. Chinot (mentioning)
confidence: 99%
“…Based on the previous works [18,16,17,1], we study both ERM and RERM for regression problems when the penalization is a norm and the loss function is simultaneously convex and Lipschitz (Assumption 3) and show that:…”
Section: Our Contributions (mentioning)
confidence: 99%
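To make the setting in this statement concrete, here is a minimal RERM sketch (our illustration, not code from the cited works): a linear class, the absolute loss (convex and 1-Lipschitz), and an ℓ1-norm penalty, fit by plain subgradient descent.

```python
# Regularized ERM sketch: minimize (1/N) * sum_i |<x_i, theta> - y_i| + lam * ||theta||_1.
# The absolute loss is convex and 1-Lipschitz; the penalty is a norm, as in the statement.
import numpy as np

def rerm_subgradient(X, y, lam=0.1, lr=0.05, n_iter=2000):
    n, d = X.shape
    theta = np.zeros(d)
    for t in range(n_iter):
        resid = X @ theta - y
        # Subgradient of the absolute-loss term plus subgradient of the l1 penalty.
        g = X.T @ np.sign(resid) / n + lam * np.sign(theta)
        theta -= lr / np.sqrt(t + 1) * g  # diminishing step size
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.0, 0.5]) + rng.standard_t(df=2, size=200)
print(rerm_subgradient(X, y))  # rough recovery of the nonzero coordinates
```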
“…For more complex problems in which both the covariates and noise can be (i) heavy-tailed and/or (ii) adversarially contaminated, the estimator obtained by minimizing a robust loss function is still sensitive to outliers in the feature space. To achieve robustness in both feature and response spaces, recent years have witnessed a rapid development of the "median-of-means" (MOM) principle, which dates back to [40] and [22], and a variety of MOM-based procedures for regression and classification in both low- and high-dimensional settings [12,13,26,27,33,35]. We refer to [34] for a recent survey.…”
Section: Related Literature (mentioning)
confidence: 99%
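In its simplest univariate form, the MOM principle invoked here reduces to the following sketch (the block count K and the random block assignment are our choices for illustration):

```python
# Median-of-means: split the sample into K equal blocks, average each block,
# and return the median of the block means. Robust to heavy tails and outliers.
import numpy as np

def median_of_means(x, K=10, rng=None):
    rng = rng or np.random.default_rng()
    blocks = np.array_split(rng.permutation(np.asarray(x, dtype=float)), K)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(1)
sample = rng.standard_cauchy(1000)    # heavy-tailed: the empirical mean is unreliable
print(median_of_means(sample, K=20))  # stays near the center 0, unlike sample.mean()
```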
“…Note that Condition 4.1 excludes some important Lipschitz continuous functions, such as the check function for quantile regression and the hinge loss for classification, which do not have a local strong convexity. The recent works [1] and [12,13] established optimal estimation and excess risk bounds for (regularized) empirical risk minimizers and MOM-type estimators based on general convex and Lipschitz loss functions even without a local quadratic behavior. Our work complements the existing results on ℓ1-regularized ERM by showing oracle properties of nonconvex regularized methods under stronger signals.…”
Section: Extension To General Robust Losses (mentioning)
confidence: 99%
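Both losses named in this statement are easy to write down; the sketch below (ours, for illustration) shows they are piecewise linear, hence convex and Lipschitz but with zero curvature almost everywhere, i.e. no local quadratic behavior.

```python
# Check (pinball) loss for quantile regression and hinge loss for classification.
# Both are piecewise linear: convex, Lipschitz, but not locally strongly convex.
import numpy as np

def check_loss(r, tau=0.5):
    # rho_tau(r) = r * (tau - 1{r < 0}): slope tau for r >= 0, slope tau - 1 for r < 0.
    return r * (tau - (r < 0).astype(float))

def hinge_loss(margin):
    # max(0, 1 - y*f(x)), written in terms of the margin y*f(x).
    return np.maximum(0.0, 1.0 - margin)

r = np.linspace(-2.0, 2.0, 5)
print(check_loss(r, tau=0.25))
print(hinge_loss(r))
```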