2019
DOI: 10.3150/18-bej1046

Regularization, sparse recovery, and median-of-means tournaments

Abstract: We introduce a regularized risk minimization procedure for regression function estimation. The procedure is based on median-of-means tournaments, introduced by the authors in [10], and achieves near-optimal accuracy and confidence under general conditions, including heavy-tailed predictor and response variables. It outperforms standard regularized empirical risk minimization procedures such as LASSO or SLOPE in heavy-tailed problems.
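As a rough illustration of the median-of-means principle that the tournament procedure builds on, the sketch below estimates a mean by splitting the sample into blocks and taking the median of the block means. The function name `median_of_means`, the block count, and the heavy-tailed test sample are illustrative assumptions for this example; this is not the authors' tournament estimator, only its basic building block.

```python
import numpy as np

def median_of_means(x, n_blocks, seed=0):
    """Median-of-means estimate of the mean of a sample x.

    The sample is split into n_blocks disjoint blocks after a random
    permutation; the estimate is the median of the within-block means.
    The median discards extreme blocks, which is what makes the estimate
    robust to heavy-tailed observations.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(x))                 # random block assignment
    blocks = np.array_split(x[perm], n_blocks)
    block_means = np.array([b.mean() for b in blocks])
    return float(np.median(block_means))

# Example: a heavy-tailed sample where the plain empirical mean fluctuates.
rng = np.random.default_rng(1)
sample = rng.standard_t(df=2.1, size=10_000)       # Student-t, heavy tails
print(median_of_means(sample, n_blocks=50))        # close to the true mean 0
```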

Cited by 38 publications (44 citation statements)
References 16 publications
“…A growing body of recent work has addressed the problem of constructing regression function estimators that work well even when some of the f(X) and Y may be heavy tailed; see Audibert and Catoni [4], Brownlees, Joly, and Lugosi [11], Catoni and Giulini [15], Chichignoud [59, 58], Mendelson [62], and Minsker [65].…”
Section: Median-of-means tournaments in regression problems
“…In this article, we address this question by considering an alternative to M-estimators, called median-of-means. Several estimators based on this principle have recently been proposed in the literature [56, 42, 47, 48, 49, 53, 43]. To our knowledge, these articles use the small-ball hypothesis [41, 54] to treat problems of least-squares regression or Lipschitz loss regression.…”
Section: Introduction
“…It follows from Lemma 6 that there exists $\alpha \ge 1$ and $f_0 \in F$ such that $P L_{f_0} = r_2^2(\gamma)$ and $f - f^* = \alpha(f_0 - f^*)$. According to (30), we have for every $k \in \{1, \ldots$…”
Section: A.4 Proof of Theorem
“…This constraint can be relaxed by considering alternative estimators based on the "median-of-means" (MOM) principle of [37, 9, 18, 1] and the minmax procedure of [3, 5]. The resulting minmax MOM estimators have been introduced in [24] for least-squares regression as an alternative to other MOM-based procedures [29, 30, 31, 23]. In the case of convex and Lipschitz loss functions, these estimators satisfy the following properties: 1) like the ERM, they are efficient under weak assumptions on the noise; 2) they achieve optimal rates of convergence under weak stochastic assumptions on the design; and 3) the rates are not downgraded by the presence of some outliers in the dataset. These improvements of MOM estimators upon ERM are not surprising.…”
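To make the MOM comparison that such estimators rely on concrete, the sketch below runs a single median-of-means "match" between two candidate regression functions: the difference of their squared losses is averaged within blocks, and the sign of the median of those block means decides which candidate is preferred. The function name `mom_match`, the linear candidates, and the noise model are hypothetical choices for illustration; this is not the estimator of [24] nor the authors' full tournament, which iterates such comparisons over a whole class of functions.

```python
import numpy as np

def mom_match(f, g, X, y, n_blocks, seed=0):
    """One median-of-means comparison ("match") between candidates f and g.

    Returns a negative value when f has smaller squared loss than g on a
    majority of blocks, and a positive value when g is preferred. Taking
    the median over block means keeps the comparison insensitive to a few
    blocks corrupted by heavy-tailed observations or outliers.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(y))
    diffs = (y - f(X)) ** 2 - (y - g(X)) ** 2          # pointwise loss differences
    block_means = [b.mean() for b in np.array_split(diffs[perm], n_blocks)]
    return float(np.median(block_means))

# Toy usage: heavy-tailed noise, two linear candidates.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.standard_t(df=2.1, size=2000)       # heavy-tailed noise

f = lambda X: X @ beta                                  # well-specified candidate
g = lambda X: X @ (beta + 1.0)                          # misspecified candidate
print(mom_match(f, g, X, y, n_blocks=40))               # negative: f wins the match
```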