2021
DOI: 10.1093/imaiai/iaab004

Excess risk bounds in robust empirical risk minimization

Abstract: This paper investigates robust versions of the general empirical risk minimization algorithm, one of the core techniques underlying modern statistical methods. Success of empirical risk minimization is based on the fact that for a ‘well-behaved’ stochastic process $\left \{ f(X), \ f\in \mathscr F\right \}$ indexed by a class of functions $f\in \mathscr F$, averages $\frac{1}{N}\sum _{j=1}^N f(X_j)$ evaluated over a sample $X_1,\ldots ,X_N$ of i.i.d. copies of $X$ provide good approximation to the expectat…
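The abstract's central point — that the empirical average $\frac{1}{N}\sum_{j=1}^N f(X_j)$ approximates the expectation well over a function class, so minimizing the former approximately minimizes the latter — can be illustrated with a minimal NumPy sketch. The one-parameter quadratic class below is my own toy construction, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample X_1, ..., X_N: i.i.d. copies of X ~ Normal(2, 1).
N = 10_000
X = rng.normal(loc=2.0, scale=1.0, size=N)

# A toy function class F = {f_c(x) = (x - c)^2 : c on a grid}.
# The population risk E[(X - c)^2] is minimized at c = E[X] = 2.
candidates = np.linspace(-5.0, 5.0, 201)

# Empirical risks (1/N) * sum_j f_c(X_j), one value per candidate c.
empirical_risk = np.array([np.mean((X - c) ** 2) for c in candidates])

# The empirical risk minimizer lands close to the population minimizer.
c_hat = candidates[np.argmin(empirical_risk)]
print(c_hat)
```

Because the sample is well behaved (Gaussian, light tails), the empirical risk curve tracks the population risk uniformly over the grid; robustness becomes necessary precisely when that uniform approximation fails, e.g. under heavy tails or contamination.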

Cited by 5 publications (4 citation statements)
References 57 publications
“…The first problem in (2.13) is known as empirical risk minimization [41,56]; the second is its regularized counterpart, and its minimizer, denoted by L^(N,R), is the focus of our discussion. These problems are meant to approximate the best linear estimator given infinite data, denoted by L, which is a solution to inf{R_∞(L) : L linear and measurable}; this agrees with solutions…”
Section: Bayesian Inversion
confidence: 99%
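The two problems in the quote — ERM over linear maps and its regularized counterpart — have a familiar concrete instance in ridge regression, whose minimizer approximates the best linear estimator as the sample grows. The sketch below is an illustration under my own assumptions (finite-dimensional, Gaussian design), not the quoted paper's infinite-dimensional setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear model y = <w*, x> + noise; w* is recovered by regularized ERM
# (ridge regression), the finite-sample surrogate for the best linear
# estimator one would obtain with infinite data.
n, d = 200, 5
w_star = np.arange(1.0, d + 1.0)           # true coefficients 1..5
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

lam = 1e-3  # regularization weight
# Closed-form minimizer of (1/n)||Xw - y||^2 + lam*||w||^2.
w_hat = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
print(np.round(w_hat, 2))
```

With small `lam` the regularizer barely biases the solution, and `w_hat` sits within sampling error of `w_star`; as n → ∞ the minimizer converges to the population-optimal linear map, mirroring the role of L in the quote.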
“…In our infinite-dimensional setting, it is also interesting to see from Theorem 3.7 that so-called "fast rates" for the excess risk (i.e., faster than N^{-1/2}, see [41]) may be attained by the posterior operator estimator in certain parameter regimes. The usual statistical learning theory techniques based on bounding suprema of empirical processes typically yield "slow" N^{-1/2} rates or worse (see, e.g., [56]).…”
Section: Bounds For Statistical Learning Functionals
confidence: 99%
“…Other robust mean estimators might also be employed to get another robust version of empirical risk minimization. For example, Mathieu and Minsker [10] used Minsker's estimator [11] for robust empirical risk minimization and gave high-confidence bounds for the excess risk (see Mathieu and Minsker [10] for details).…”
Section: Application
confidence: 99%
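The idea in the quote — swapping the plain empirical mean for a robust mean estimator when evaluating each candidate's risk — can be sketched with median-of-means risk estimates. This is a simplified illustration of the general recipe, not the specific estimator of [10]/[11]:

```python
import numpy as np

rng = np.random.default_rng(2)

def median_of_means(values, n_blocks=10):
    """Robust mean: split into blocks, average each block, take the median."""
    blocks = np.array_split(values, n_blocks)
    return np.median([b.mean() for b in blocks])

# Sample centered at 2, contaminated by a few gross outliers.
N = 2_000
X = rng.normal(2.0, 1.0, size=N)
X[:20] = 1e6

candidates = np.linspace(-5.0, 5.0, 201)
losses = [(X - c) ** 2 for c in candidates]

# Plain ERM: the empirical risk is dominated by the outliers, so the
# minimizer is dragged far from the true location.
c_erm = candidates[np.argmin([np.mean(l) for l in losses])]

# Robust ERM: evaluate each candidate's risk with median-of-means instead;
# the corrupted block is voted down by the median.
c_rob = candidates[np.argmin([median_of_means(l) for l in losses])]
print(c_erm, c_rob)
```

Only the risk-evaluation step changes; the minimization over the class is untouched, which is why this recipe composes with essentially any robust mean estimator, as the quoted statement notes.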
“…MoM estimators are not only insensitive to outliers, but are also equipped with exponential concentration results under the mild condition of finite variance (Lugosi and Mendelson, 2019; Lerasle, 2019; Laforgue et al., 2019). Recently, near-optimal results for mean estimation (Minsker, 2018), classification, regression (Mathieu and Minsker, 2021; Lugosi and Mendelson, 2019), clustering (Klochkov et al., 2020; Brunet-Saumard et al., 2022), bandits (Bubeck et al., 2013) and optimal transport (Staerman et al., 2021) have been established from this perspective.…”
Section: Introduction
confidence: 99%